# ChatAccordion and ChatCard #6598
Actually I think I have it reversed; the card should be the outer object while the accordion is the inner object.

*(Screen recording attachment: Screen.Recording.2024-03-27.at.11.05.46.PM.mov)*

```python
import time

import panel as pn

pn.extension()


def callback(contents, user, instance):
    time.sleep(0.5)
    # the outer Card wraps an inner Accordion of intermediate steps
    card = pn.Card(
        title="Thinking...", sizing_mode="stretch_width",
    )
    accordion = pn.Accordion(sizing_mode="stretch_width", active=[0], margin=0)
    card.append(accordion)
    yield card

    first_card = pn.pane.Markdown("Thinking...")
    accordion.append(("What are my meetings today?", first_card))
    time.sleep(2)
    first_card.object = "You have a meeting at 10:00 AM with the team and another at 2:00 PM with the client."

    second_card = pn.pane.Markdown("Thinking...")
    accordion.append(("What time is it right now", second_card))
    time.sleep(1)
    second_card.object = "It is currently 9:30 AM."

    third_card = pn.pane.Markdown("Thinking...")
    accordion.append(("Which meeting is next?", third_card))
    time.sleep(1)
    third_card.object = "Your next meeting is at 10:00 AM with the team."

    fourth_card = pn.pane.Markdown("Thinking...")
    accordion.append(("How long until the next meeting?", fourth_card))
    time.sleep(1)
    fourth_card.object = "Your next meeting is in 30 minutes."
    return "Your next meeting is in 30 minutes."


chat = pn.chat.ChatInterface(callback=callback)
chat.show()
chat.send("How long until my next meeting?")
```
Why do we need new components? Why can we not just use the existing components, as shown in the post just above? ps. I don't think you can mix `yield` and `return` like that.
Agreed, if existing components don't do what we need them to, we should first look to see if we can improve them to achieve what we need rather than moving straight to adding new components.
I'm all for making it easy to show intermediate steps via Cards and Accordions. That is very powerful.
I think the context manager part is definitely a desired feature, which `pn.Card` does not support natively, but I'll explore this idea more with applicable examples and identify what's missing.
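For discussion's sake, a minimal sketch of what native context-manager support could look like — the `StatusCard` subclass below is purely illustrative, not an existing or planned Panel API:

```python
import panel as pn


class StatusCard(pn.Card):
    """Hypothetical Card usable as a context manager (illustrative only)."""

    def __enter__(self):
        # show a placeholder title while work is in progress
        self.title = "Thinking..."
        return self

    def __exit__(self, exc_type, exc, tb):
        # on success, promote the last pane's text to the card title
        if exc_type is None and self.objects:
            self.title = getattr(self.objects[-1], "object", self.title)
        return False  # never swallow exceptions
```

This would mirror the `@contextmanager` helper in the LLM example further down, just baked into the component itself.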
Oh, definitely not saying there aren't various improvements we could make, but I just wanted to emphasize that we should always try to generalize existing components and concepts instead of adding new ones. That's not to say that's necessarily true here, and I owe this issue a more detailed look and analysis.
Okay, here's an actual version using an LLM:

*(Screen recording attachment: Screen.Recording.2024-03-28.at.8.47.57.PM.mov)*

Two things I discovered:

```python
from typing import List

from pydantic import Field, BaseModel


class Query(BaseModel):
    """Class representing a single question in a query plan."""

    id: int = Field(..., description="Unique id of the query")
    question: str = Field(
        ...,
        description="Question asked using a question answering system",
    )
    dependencies: List[int] = Field(
        default_factory=list,
        description="List of sub questions that need to be answered before asking this question",
    )


class QueryPlan(BaseModel):
    """Container class representing a tree of questions to ask a question answering system."""

    query_graph: List[Query] = Field(
        ..., description="The query graph representing the plan"
    )

    def _dependencies(self, ids: List[int]) -> List[Query]:
        """Returns the dependencies of a query given their ids."""
        return [q for q in self.query_graph if q.id in ids]


import instructor
from openai import OpenAI

# Apply the patch to the OpenAI client;
# enables the response_model keyword
client = instructor.patch(OpenAI())


def query_planner(question: str) -> QueryPlan:
    PLANNING_MODEL = "gpt-4-0613"
    messages = [
        {
            "role": "system",
            "content": "You are a world class query planning algorithm capable of breaking apart questions into its dependency queries such that the answers can be used to inform the parent question. Do not answer the questions, simply provide a correct compute graph with good specific questions to ask and relevant dependencies. Before you call the function, think step-by-step to get a better understanding of the problem.",
        },
        {
            "role": "user",
            "content": f"Consider: {question}\nGenerate the correct query plan.",
        },
    ]
    root = client.chat.completions.create(
        model=PLANNING_MODEL,
        temperature=0,
        response_model=instructor.Partial[QueryPlan],
        messages=messages,
        max_tokens=1000,
        stream=True,
    )
    return root


import time

import panel as pn
from contextlib import contextmanager

pn.extension()


@contextmanager
def show_steps(instance):
    try:
        card = pn.Card(
            title="Thinking...",
            sizing_mode="stretch_width",
        )
        instance.stream(card)
        yield card
    finally:
        # promote the last streamed answer to the card title
        card.title = card.objects[-1].objects[-1].object


def callback(contents, user, instance):
    queries = {}
    with show_steps(instance) as card:
        chunked_plan = query_planner(
            "What is the difference in populations of Canada and Andrew's home country?"
        )
        for chunk in chunked_plan:
            for query in chunk.model_dump()["query_graph"] or []:
                # skip partial chunks that do not yet carry an id or question
                if query.get("id") is None or query.get("question") is None:
                    continue
                if query["id"] not in queries:
                    answer_md = pn.pane.Markdown("")
                    sub_card = pn.Card(
                        answer_md,
                        title=query["question"] or "",
                        sizing_mode="stretch_width",
                    )
                    queries[query["id"]] = sub_card
                    card.append(sub_card)
                else:
                    sub_card = queries[query["id"]]
                    sub_card.title = query["question"] or ""

        history = []
        for sub_card in card:
            # serialize is desperately needed here:
            question_dict = {"role": "user", "content": sub_card.title}
            history.append(question_dict)
            messages = [
                {
                    "role": "system",
                    "content": "Answer the following question to the best of your abilities.",
                },
                {
                    "role": "user",
                    "content": "Andrew lives in the US",
                },
                *history,
            ]
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                temperature=0,
                messages=messages,
                max_tokens=1000,
                stream=True,
            )
            for chunk in response:
                if chunk:
                    sub_card.objects[0].object += chunk.choices[0].delta.content or ""
            history.append({"role": "assistant", "content": sub_card.objects[0].object})


chat = pn.chat.ChatInterface(callback=callback, callback_exception="verbose")
chat.show()
```
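Regarding the `# serialize is desperately needed here` comment above: a minimal sketch of the kind of helper that seems to be missing — the function name and message shape are assumptions, not an existing Panel API:

```python
def serialize_card(card):
    """Flatten a Card of question-titled sub-Cards into OpenAI-style messages.

    Hypothetical helper: each sub-card's title holds the question and its
    first pane holds the streamed answer.
    """
    messages = []
    for sub_card in card:
        messages.append({"role": "user", "content": sub_card.title})
        answer = sub_card.objects[0].object
        if answer:
            messages.append({"role": "assistant", "content": answer})
    return messages
```

With something like this, the manual `history` bookkeeping in the callback above could collapse into a single call per iteration.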
Okay, I have a draft API available at #6617 now.

*(Screen recording attachment: Screen.Recording.2024-04-01.at.6.00.35.PM.mov)*
Edit: I realized I have the `ChatAccordion` and `ChatStatusCard` reversed below in the design, i.e. the `ChatCard` should wrap the `ChatAccordion`.

The design should also mention how to integrate this functionality seamlessly into `callback`. My initial thought is that yielding/returning a dict with `intermediate=True`, e.g. `{"The text to stream", "intermediate": True}`, creates a ChatMessage with these chat status cards. Maybe a context manager too.

*(Screen recording attachment: Screen.Recording.2024-03-27.at.11.05.46.PM.mov)*
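A sketch of how that might look from the callback side — the `intermediate` key is the proposal itself, not something `ChatInterface` understands today, and the `"object"` key for the dict payload is an assumption about the final shape:

```python
import panel as pn

pn.extension()


def callback(contents, user, instance):
    # hypothetical: dicts flagged as intermediate would be collected into
    # a single status card instead of becoming standalone chat messages
    yield {"object": "Looking up your calendar...", "intermediate": True}
    yield {"object": "Comparing time zones...", "intermediate": True}
    yield "Your next meeting is in 30 minutes."


chat = pn.chat.ChatInterface(callback=callback)
```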
---
To improve the user experience while LLMs are performing intermediate steps, I'd like to propose two new chat components, namely `ChatAccordion` and `ChatStatusCard`.

Unlike streaming text character by character, these two components will improve the perception of runtime by outputting each intermediate step as a `ChatStatusCard`.

Here's an example:



The LLM could output a step-by-step plan to achieve the outcome:

- each `ChatStatusCard` is appended to the `ChatAccordion`
- each step is a `ChatStatusCard`, which users may open at any time to view
- once completed, it'll show the final result at the bottom
- the intermediate steps can also be hidden

Here's the abstract version:

I believe the API would look like the rough sketch shown at the end of this post.

At the current moment, I'm having a hard time figuring out a few details (should there be a `ChatText` for streaming text?).

Any feedback appreciated.
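As a concrete starting point for discussion, here is one possible shape for that API — neither component exists in Panel yet, so every name and parameter below is a guess, not a committed design:

```python
import panel as pn

pn.extension()


def callback(contents, user, instance):
    # hypothetical components from this proposal
    accordion = pn.chat.ChatAccordion(sizing_mode="stretch_width")
    yield accordion
    for step in ("Draft a plan", "Gather data", "Summarize"):
        status = pn.chat.ChatStatusCard(title=step, status="running")
        accordion.append(status)
        ...  # perform the actual work for this step
        status.status = "completed"
    yield "Final result, with the intermediate steps collapsed above."


chat = pn.chat.ChatInterface(callback=callback)
```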