Helicone helpers async #3367
Conversation
PR Summary (Greptile)

This PR introduces significant updates to Helicone's manual logging functionality, focusing on improved async support and enhanced logging capabilities across the Python and TypeScript SDKs.

- Changed the default logging endpoint to `api.worker.helicone.ai` and added time-to-first-token tracking in both the Python and TypeScript SDKs
- Added support for string responses via a new `textBody` field in the `ProviderResponse` type
- Introduced a `LoggingOptions` TypedDict in the Python SDK for better type safety and timing control
- Fixed TogetherAI stream processor error messages and added Kafka message handling gated on the `KAFKA_ENABLED` flag
- Critical bug: the `log_request` method calls the non-existent `__send_log` after its rename to `send_log` in the Python SDK

12 file(s) reviewed, 11 comment(s)
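For orientation, here is a rough sketch of the response type change the summary describes; only `textBody` is named in this PR, so every other field below is an illustrative placeholder rather than Helicone's actual definition.

```typescript
// Illustrative shape only: textBody is the field this PR adds; the other
// fields are placeholders, not the SDK's real ProviderResponse definition.
type ProviderResponse = {
  json?: Record<string, unknown>; // structured JSON responses
  textBody?: string;              // new: raw string responses
  status?: number;                // placeholder for whatever metadata the type carries
};
```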
```python
# Create a streaming response
response = client.chat.completions.create(**request)
```
logic: Request object is accessed directly instead of using `result_recorder.request` as shown in the basic example
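A minimal sketch of the fix this comment points at, assuming the recorder API it references (`result_recorder.request` plus an `append_results` method, as in the basic example); names beyond those two are illustrative:

```python
def operation(result_recorder):
    # Read the request from the recorder instead of the enclosing scope,
    # so the logged request matches what the recorder actually holds.
    response = client.chat.completions.create(**result_recorder.request)
    result_recorder.append_results(response.to_dict())
    return response
```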
"stream_options": { | ||
"include_usage": True | ||
} | ||
} |
logic: `stream_options` with `include_usage` is not a documented OpenAI parameter and may not work as expected
"stream_options": { | |
"include_usage": True | |
} | |
} | |
} |
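One caveat on this comment: recent OpenAI Chat Completions releases do document `stream_options`, where `{"include_usage": True}` makes the final streamed chunk carry token usage (with an empty `choices` list). A sketch of that behavior, assuming an SDK version new enough to support the parameter:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
    stream_options={"include_usage": True},
)

for chunk in stream:
    if chunk.usage is not None:
        # Only the final chunk populates usage; its choices list is empty.
        print(chunk.usage.total_tokens)
```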
```python
helicone_logger.send_log(
    provider="openai",
    request=request_body,
    response="\n".join(chunks),  # Join chunks or process as needed
```
style: Joining chunks with newlines may produce invalid JSON structure for the response
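Given the string-response support this PR adds, a hedged sketch of two safer alternatives (assuming `chunks` holds the streamed text deltas and `request_body` is the dict sent to the provider):

```python
# Option 1: log the streamed output as a plain string, relying on the
# Union[dict, str] response support introduced in this PR.
helicone_logger.send_log(
    provider="openai",
    request=request_body,
    response="".join(chunks),  # concatenated text, not a JSON document
    options={"start_time": start_time, "end_time": end_time},
)

# Option 2: reassemble a JSON-shaped body if consumers expect the OpenAI
# schema (the exact fields below are illustrative).
response_body = {
    "choices": [{"message": {"role": "assistant", "content": "".join(chunks)}}]
}
```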
request={"model": "gpt-4", "messages": [...]}, | ||
response={"choices": [...]}, # Can also be a string for text responses |
style: example uses ellipsis `[...]`, which may confuse users - should show the actual message structure
request={"model": "gpt-4", "messages": [...]}, | |
response={"choices": [...]}, # Can also be a string for text responses | |
request={"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}, | |
response={"choices": [{"message": {"content": "Hi there!"}}]}, # Can also be a string for text responses |
"content": "Write a story about a robot" | ||
} | ||
], | ||
"stream": true |
logic: `stream: true` in the request, but the response uses `textBody` instead of a streaming format
````diff
@@ -506,3 +506,4 @@ export async function POST(request: Request) {
 ```
 
+For a comprehensive guide on using the Manual Logger with streaming functionality, check out our [Manual Logger with Streaming](/guides/cookbooks/manual-logger-streaming) cookbook.
+````
````
syntax: Extra backticks at the end of the file cause a formatting issue
Suggested change:

```diff
- ````
```
```diff
-    response: dict,
-    options: dict
+    response: Union[dict, str],
+    options: LoggingOptions
 ):
     start_time = options.get("start_time")
```
logic: Potential error: `options.get()` is used with a TypedDict, which doesn't support `.get()` - should use direct attribute access instead
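One caveat on this comment: at runtime a TypedDict value is a plain dict, so `.get()` does work, while attribute access raises; a quick self-contained check (field names assumed from elsewhere in this review):

```python
from typing import TypedDict

class LoggingOptions(TypedDict, total=False):
    start_time: float
    end_time: float
    time_to_first_token_ms: int

opts: LoggingOptions = {"start_time": 1.0}
print(opts.get("end_time"))  # None: .get() works because the value is a dict
# opts.start_time            # would raise AttributeError: no attribute access
```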
```diff
@@ -109,7 +118,8 @@ def __send_log(
         "endTime": {
             "seconds": int(end_time),
             "milliseconds": int((end_time - int(end_time)) * 1000)
-        }
+        },
+        "timeToFirstToken": options.get("time_to_first_token") if options.get("time_to_first_token") is not None else None
```
logic: Inconsistent naming: `time_to_first_token` vs `time_to_first_token_ms` in the `LoggingOptions` type definition
"timeToFirstToken": options.get("time_to_first_token") if options.get("time_to_first_token") is not None else None | |
"timeToFirstToken": options.get("time_to_first_token_ms") if options.get("time_to_first_token_ms") is not None else None |
```typescript
if (KAFKA_ENABLED) {
  const kafkaProducer = new KafkaProducer();
  const res = await kafkaProducer.sendMessages(
    [logMessage],
    "request-response-logs-prod-dlq"
  );

  if (res.error) {
    Sentry.captureException(new Error(res.error), {
      tags: {
        type: "KafkaError",
        topic: "request-response-logs-prod-dlq",
      },
      extra: {
        requestId: logMessage.log.request.id,
        responseId: logMessage.log.response.id,
        orgId: handlerContext.orgParams?.id ?? "",
        batchId: batchContext.batchId,
        partition: batchContext.partition,
        offset: batchContext.lastOffset,
        messageCount: batchContext.messageCount,
      },
    });

    console.error(
      `Error sending message to DLQ: ${res.error} for request ${logMessage.log.request.id} in batch ${batchContext.batchId}`
    );
  }
}
```
style: Consider reusing the `KafkaProducer` instance instead of creating a new one for each error case
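One hedged way to apply this suggestion, assuming `KafkaProducer` is safe to share across calls (worth verifying against its connection handling): hoist a single instance to module scope. `dlqProducer` is an illustrative name, not an existing identifier in the repo.

```typescript
// Created once at module load and reused by every DLQ write; null when
// Kafka is disabled so call sites can early-return.
const dlqProducer: KafkaProducer | null = KAFKA_ENABLED
  ? new KafkaProducer()
  : null;
```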
```typescript
if (KAFKA_ENABLED) {
  const kafkaProducer = new KafkaProducer();
  const kafkaResult = await kafkaProducer.sendMessages(
    logMessages,
    "request-response-logs-prod-dlq"
  );

  if (kafkaResult.error) {
    Sentry.captureException(new Error(kafkaResult.error), {
      tags: {
        type: "KafkaError",
        topic: "request-response-logs-prod-dlq",
      },
      extra: {
        batchId: batchContext.batchId,
        partition: batchContext.partition,
        offset: batchContext.lastOffset,
        messageCount: batchContext.messageCount,
      },
    });
  }
}
```
style: Similar pattern of `KafkaProducer` instantiation - consider extracting producer creation and message sending into a reusable method
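Building on the shared producer sketched above, the duplicated send-plus-report pattern could fold into one helper; `sendToDlq`, `LogMessage`, and the parameter names are all illustrative, not existing APIs in this codebase:

```typescript
// Hypothetical helper consolidating the DLQ send + Sentry + console pattern
// from both call sites; adjust types to the repo's actual definitions.
async function sendToDlq(
  messages: LogMessage[],
  extra: Record<string, unknown>
): Promise<void> {
  if (!dlqProducer) return; // KAFKA_ENABLED was false
  const res = await dlqProducer.sendMessages(
    messages,
    "request-response-logs-prod-dlq"
  );
  if (res.error) {
    Sentry.captureException(new Error(res.error), {
      tags: { type: "KafkaError", topic: "request-response-logs-prod-dlq" },
      extra,
    });
    console.error(
      `Error sending ${messages.length} message(s) to DLQ: ${res.error}`
    );
  }
}
```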
Summary
No description provided.