[playground] format for messages is overly strict #7844
@sanin17 Just to check my understanding: it sounds like the messages above are produced by LangChain, and you are attempting to replay them in the playground? Out of curiosity, do you have a LangChain code snippet that produces those messages? I am surprised that messages in that format are considered valid.
Hi, apologies for the late reply; I was AFK for the weekend. Yes, you are correct that I am trying to replay those messages in the playground. The code pretty much follows this example, except that the llm object (line 117) is AzureChatOpenAI from the langchain_openai library. Since we pass the LangChain AzureChatOpenAI llm object into the browser-use library externally, we do not have control over the messages array. The browser-use library internally passes an empty value for "content" to the LangChain library, which in turn omits the field when the string is empty. Hope this helps.
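The omission described above can be illustrated with a minimal, hypothetical sketch (`serialize_message` is not a real browser-use or LangChain function; it only mirrors the reported behavior):

```python
def serialize_message(role: str, content: str) -> dict:
    """Build an OpenAI-style message dict, dropping empty content.

    Mirrors the behavior described in the issue: an empty string for
    content causes the field to be omitted entirely from the message.
    """
    msg = {"role": role}
    if content:  # empty string is falsy, so the field is omitted
        msg["content"] = content
    return msg

print(serialize_message("assistant", ""))      # {'role': 'assistant'}
print(serialize_message("user", "hello"))      # {'role': 'user', 'content': 'hello'}
```

A downstream parser that requires both `role` and `content` would then reject the first message.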
Thanks for the details @sanin17, we'll take a look.
Hey @sanin17 - we will try to fix the parsing on the Phoenix side, but it does feel to us like we are missing some level of fidelity in the LangChain instrumentation. If you feel there are parts we are not capturing, please file an issue with us in the openinference package!
I apologize, I need a little more clarification here. My research shows that it is entirely valid for a message in the messages array to have a role without content. Your openinference JS package even acknowledges this here, and I also found this in the langchain openai library. Is that where you need clarification?
Thanks for the follow-up @sanin17. I notice that the OpenAI API doesn't seem to allow messages without content. For example, this request:
fails with this error:
If your LangChain code manages to execute successfully, that suggests to me that LangChain is probably adding content to the message that is not being recorded by our instrumentation.
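The request and error bodies did not survive in the thread above, but the shape of the failing request can be sketched. This is a hypothetical reconstruction (the model name is a placeholder, and `invalid_messages` is an illustrative local check, not an OpenAI client call): the Chat Completions API generally rejects an assistant message that carries neither `content` nor `tool_calls`.

```python
import json

# Hypothetical payload of the kind referenced above: one message
# omits "content" entirely (and has no "tool_calls" either).
payload = {
    "model": "gpt-4o",  # placeholder model name
    "messages": [
        {"role": "user", "content": "hello"},
        {"role": "assistant"},  # no "content", no "tool_calls"
    ],
}

def invalid_messages(messages: list[dict]) -> list[int]:
    """Return indices of messages that have neither content nor tool_calls,
    mirroring the API-side requirement described above."""
    return [
        i for i, m in enumerate(messages)
        if "content" not in m and "tool_calls" not in m
    ]

print(json.dumps(payload, indent=2))
print(invalid_messages(payload["messages"]))  # [1]
```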
Discussed in #7813
Originally posted by sanin17 May 30, 2025
Hi,
I am working on a project using browser-use with Azure OpenAI (via the LangChain library in Python) to do some automation. I'm using a self-hosted Phoenix to trace and improve the prompts. I can see the traces just fine, but when I click the "Playground" button to work with the prompt, I get the error "Unable to parse span input messages, expected messages which include a role and content".
A little more info - the only way I am able to see the raw JSON is to click the "Add to Dataset" button, which shows the JSON input. In it, I see that some of the JSON objects in the messages array contain only a role, but no content. An example:
```json
[
  { "role": "assistant" },
  { "role": "tool" },
  { "role": "user", "content": "Action Result: Scrolled to Text" }
]
```
Because some of the messages don't have a "content" field, the playground cannot parse them. My Gemini-based research shows that the LangChain Azure OpenAI library will omit the "content" field if the string is empty, but it's still valid for there to be a role without content, because the assistant may trigger a tool, as in the example above. (The example is just a portion of the messages array; a tools array follows as well.)
Is this a known issue on your end? Somewhere the standard is not being adhered to, and I'm wondering if Phoenix could be a bit more resilient here and still replay a trace when this situation exists.
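One possible shape of the resilience being asked for can be sketched as a normalization pass that runs before strict parsing. This is an illustrative sketch, not Phoenix's actual implementation: messages that omit `content` get an empty string filled in, so a parser requiring both `role` and `content` can still replay the span.

```python
def normalize_messages(messages: list[dict]) -> list[dict]:
    """Default a missing "content" field to an empty string.

    Illustrative only: lets a strict role+content parser accept
    messages like {"role": "assistant"} without losing information.
    """
    return [{**m, "content": m.get("content", "")} for m in messages]

raw = [
    {"role": "assistant"},
    {"role": "tool"},
    {"role": "user", "content": "Action Result: Scrolled to Text"},
]
print(normalize_messages(raw))
```

A real fix would also need to preserve fields like `tool_calls`, which the dict merge above already carries through unchanged.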