Increase Total tokens to 128K (currently 4K) #1186
Comments
We are considering adding these parameters and making them configurable in compose.yaml to support flexible setups. A PR will be created for this, and we will update the details here once the PR is ready. However, the models themselves currently don't support a 256K context length, and some hardware also has limitations on the maximum input and output token lengths it can support. We recommend exploring alternative approaches, such as chunking files or using recursive summarization techniques, to achieve optimal results within the current technical limitations.
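The chunking and recursive summarization approach mentioned above can be sketched roughly as follows. This is a minimal illustration, not OPEA's implementation: the `summarize` callable stands in for whatever model-backed summarization endpoint is available, and the chunk size is an illustrative placeholder to be tuned to the model's actual context window.

```python
# Minimal sketch of map-reduce style recursive summarization for documents
# that exceed the model's context window. `summarize` is a hypothetical
# callable (text -> summary string); CHUNK_CHARS is illustrative only.

CHUNK_CHARS = 2000  # placeholder; tune to the model's real context limit


def split_into_chunks(text, size=CHUNK_CHARS):
    """Split text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def recursive_summarize(text, summarize, size=CHUNK_CHARS):
    """Summarize text of arbitrary length within a bounded context window.

    Short text is summarized directly; long text is split into chunks,
    each chunk is summarized, and the concatenated chunk summaries are
    summarized again until the result fits in one call.
    """
    if len(text) <= size:
        return summarize(text)
    partials = [summarize(chunk) for chunk in split_into_chunks(text, size)]
    return recursive_summarize(" ".join(partials), summarize, size)
```

In practice a token-aware splitter (splitting on sentence or paragraph boundaries, counted in tokens rather than characters) would be preferable, but the control flow is the same.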
Hi all,
Sorry for this question; I'm not too familiar yet with PR and issue filing. How do I find the PR associated with an issue? Are they linked together somehow? Any easy search mechanism?
Cheers,
-Padma
@Padmaapparao Once a PR is created that references/mentions this issue, it will be linked here automatically.
PRs (already merged) for supporting longer documents with the current small token amounts are:
Hi @Padmaapparao, as @eero-t mentioned, OPEA now offers multiple strategies to support long contexts for DocSum, including
Closed due to no active responses in the last 30 days. Please feel free to reopen it if the PRs do not resolve the issue.
For the DocSum example, since we will upload hundreds of files, we need both the input and output token lengths to be large. Currently the total is fixed at 4096 tokens, so if we upload even one large file, the output token length for the summarization can drop to only 32 tokens, which is far too small for a summary.
We need a total of 128K tokens, so we can get at least a 16K-32K token summary.
These values are hardcoded in compose.yaml; they need to be parametrizable.
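One way the hardcoded values could be made parametrizable is via environment-variable substitution in compose.yaml, which Docker Compose supports natively. The sketch below is hypothetical: the service name, image, and variable names (`MAX_INPUT_TOKENS`, `MAX_TOTAL_TOKENS`) are illustrative placeholders, not the exact OPEA settings.

```yaml
# Hypothetical sketch: token limits read from the environment, with the
# current defaults as fallbacks, instead of being hardcoded.
services:
  llm-serving:
    image: example/llm-serving:latest   # placeholder image name
    environment:
      MAX_INPUT_TOKENS: ${MAX_INPUT_TOKENS:-3072}
      MAX_TOTAL_TOKENS: ${MAX_TOTAL_TOKENS:-4096}
```

With this pattern, `MAX_TOTAL_TOKENS=131072 docker compose up` would override the default without editing the file, assuming the serving backend honors the variable.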