A detailed tool for dynamic and reflective problem-solving through thoughts.
This tool helps analyze problems through a flexible thinking process that can adapt and evolve.
Each thought can build on, question, or revise previous insights as understanding deepens.
When to use this tool:
- Breaking down complex problems into steps
- Planning and design with room for revision
- Analysis that might need course correction
- Problems where the full scope might not be clear initially
- Problems that require a multi-step solution
- Tasks that need to maintain context over multiple steps
- Situations where irrelevant information needs to be filtered out
Key features:
- You can adjust total_thoughts up or down as you progress
- You can question or revise previous thoughts
- You can add more thoughts even after reaching what seemed like the end
- You can express uncertainty and explore alternative approaches
- Not every thought needs to build linearly - you can branch or backtrack
- Generates a solution hypothesis
- Verifies the hypothesis based on the Chain of Thought steps
- Repeats the process until satisfied
- Provides a correct answer
Parameters explained:
- thought: Your current thinking step, which can include:
* Regular analytical steps
* Revisions of previous thoughts
* Questions about previous decisions
* Realizations about needing more analysis
* Changes in approach
* Hypothesis generation
* Hypothesis verification
- next_thought_needed: True if you need more thinking, even if at what seemed like the end
- thought_number: Current number in sequence (can go beyond initial total if needed)
- total_thoughts: Current estimate of thoughts needed (can be adjusted up/down)
- is_revision: A boolean indicating if this thought revises previous thinking
- revises_thought: If is_revision is true, which thought number is being reconsidered
- branch_from_thought: If branching, which thought number is the branching point
- branch_id: Identifier for the current branch (if any)
- needs_more_thoughts: If reaching end but realizing more thoughts needed
You should:
1. Start with an initial estimate of needed thoughts, but be ready to adjust
2. Feel free to question or revise previous thoughts
3. Don't hesitate to add more thoughts if needed, even at the "end"
4. Express uncertainty when present
5. Mark thoughts that revise previous thinking or branch into new paths
6. Ignore information that is irrelevant to the current step
7. Generate a solution hypothesis when appropriate
8. Verify the hypothesis based on the Chain of Thought steps
9. Repeat the process until satisfied with the solution
10. Provide a single, ideally correct answer as the final output
11. Only set next_thought_needed to false when truly done and a satisfactory answer is reached
The separation of the session establishment and messaging endpoints is intended to simplify Cross-Origin Resource Sharing (CORS). By providing a 'simple' HTTP POST endpoint for message exchange, CORS preflight requests can be avoided.
1. What is MCP (Model Context Protocol)
MCP (Model Context Protocol) is a communication protocol introduced and open-sourced by Anthropic in 2024 to solve the problem of connecting large language models (LLMs) to external data sources and tools. It defines the protocol for communication between the model and external interfaces, data, and prompts. A tool or resource provider only needs to implement the MCP protocol to connect to any LLM app that implements an MCP client; at runtime the LLM app automatically fetches the tool list, prompts, and resource list from the MCP server over JSON-RPC, as defined by the protocol.
2. What does MCP define?
MCP defines the following primitives:
2.1 Tools
People often compare FunctionCall with MCP, and some even sigh "why create MCP when FunctionCall already exists". In my view the two do not conflict: FunctionCall is effectively a subset of MCP. MCP supports FunctionCall too, but it additionally defines Resources, Prompts, and other primitives, and places explicit protocol-level constraints on how they are fetched, invoked, and updated.
At the protocol level, MCP defines how tools are discovered and invoked:
- tools/list: fetches the list of all tools on the current MCP server, mainly metadata: each tool's description, the parameters it needs, and the schema of its output.
- tools/call: executes a tool invocation and returns the result.
- notifications/tools/list_changed: over a long-lived connection, the server pushes updated tool information so the client can refresh its cached tool list (MCP Server -> Client).

In an agent these are generally used the same way as traditional FunctionCall: after connecting to the MCP server and fetching the metadata of all tools, render it straight into the system prompt. A minimal client sketch is shown below.
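For illustration, here is a minimal sketch of this flow using the official `mcp` Python SDK over the stdio transport. The `server.py` command and the `add` tool are hypothetical placeholders, and the call names are taken from the SDK's client API as I understand it.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical MCP server launched as a subprocess over stdio.
server = StdioServerParameters(command="python", args=["server.py"])


async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()            # initialize handshake

            tools = await session.list_tools()    # JSON-RPC: tools/list
            for tool in tools.tools:
                # Tool metadata that an agent would render into its system prompt.
                print(tool.name, tool.description)

            result = await session.call_tool(     # JSON-RPC: tools/call
                "add", arguments={"a": 1, "b": 2}
            )
            print(result.content)


asyncio.run(main())
```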
2.2 Resources
In MCP, a Resource is an application-controlled primitive that lets a server expose readable data and content to the client, which can then be used as context for LLM interactions. Resources are similar to how resources are defined in RESTful APIs: a resource can be a file, a database record, an API response, or a log file.
MCP requires every resource entity to have a unique URI in standard URL form, `protocol://host/path`. For example, to expose a Postgres table as a resource, its URI could be `postgres://<host>:5432/<schema>/<database>/<table>`. In MCP, a resource's metadata is defined as:
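A rough sketch of one resource's metadata, with field names taken from the MCP specification's Resource type (the values are illustrative):

```python
# One entry from a resources/list response (shape per the MCP spec; values illustrative).
resource = {
    "uri": "postgres://db.example.com:5432/public/mydb/users",  # unique resource URI
    "name": "users table",                                      # human-readable name
    "description": "All rows of the users table",               # optional description
    "mimeType": "application/json",                             # optional MIME type of the content
}
```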
Resources are fetched as a whole, which is the biggest difference from FunctionCall. In the database scenario, for example, a QueryTool could achieve an effect similar to a resource, but a resource emphasizes returning all of its content in one shot, while a tool emphasizes the result of performing an action, and that action may be a read or a write. The sketch below shows the two request shapes side by side.
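To make the contrast concrete, here is a sketch of the two raw JSON-RPC requests; the method names come from the spec, while the URI, the `query` tool, and its SQL argument are hypothetical:

```python
# Reading a resource returns the whole content in one shot.
read_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/read",
    "params": {"uri": "postgres://db.example.com:5432/public/mydb/users"},
}

# A tool call describes an action; the result is whatever that action produced.
query_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query",  # hypothetical QueryTool
        "arguments": {"sql": "SELECT * FROM users LIMIT 10"},
    },
}
```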
2.3 Prompt
Each MCP server can provide its own prompt templates, mainly prompts that accompany the functionality the server exposes; they generally let the LLM quickly learn how to make better use of the tools on that MCP server.
For example, for an agent that refactors code, a server might expose a prompt like the sketch below.
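As a sketch of what such a server-provided prompt might look like on the wire, here is a hypothetical prompts/get exchange; the method and result shape follow the spec, while the `refactor-code` prompt name, its argument, and the message text are made up for illustration:

```python
# Client asks the server to render its "refactor-code" prompt template.
get_prompt_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "prompts/get",
    "params": {
        "name": "refactor-code",              # hypothetical prompt exposed by the server
        "arguments": {"language": "python"},  # template arguments
    },
}

# The server replies with ready-to-use chat messages for the LLM.
get_prompt_response = {
    "jsonrpc": "2.0",
    "id": 3,
    "result": {
        "description": "Guide the model through a safe refactor",
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Refactor the following python code and keep its behaviour identical: ...",
                },
            }
        ],
    },
}
```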
2.4 Sampling
Sampling is a feature of MCP that allows a server to request an LLM completion from the client. This is the reverse of the usual interaction pattern: normally the client asks the server for data or functionality, but with Sampling the server can proactively ask the client to invoke the LLM to generate text or perform reasoning.
In short, Sampling lets an MCP server use the language model connected to the client "in reverse", enabling more sophisticated agentic behavior while keeping security and privacy under the client's control.
The Sampling workflow follows these steps: the server sends a sampling/createMessage request to the client; the client reviews (and may modify) the request, samples from the LLM, reviews the completion, and returns the result to the server. When requesting Sampling, the server can provide a number of parameters to fine-tune the LLM's behavior, such as the messages, model preferences, an optional system prompt, how much MCP context to include, temperature, and the maximum number of tokens.
The server can also state model-selection preferences through the modelPreferences object and ask for a particular system prompt through the systemPrompt field, but the client ultimately decides which model to use and whether to honor the system prompt. A sketch of such a request is shown below.
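A rough sketch of a sampling/createMessage request; the field names follow the MCP specification as I understand it, and the message text, hints, and numbers are illustrative:

```python
create_message_request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarize the attached log file."},
            }
        ],
        "systemPrompt": "You are a careful operations assistant.",  # the client may ignore this
        "includeContext": "thisServer",      # how much MCP context the client should attach
        "modelPreferences": {
            "hints": [{"name": "claude-3"}],  # soft hints only; the client picks the model
            "intelligencePriority": 0.8,
            "speedPriority": 0.2,
        },
        "maxTokens": 500,
    },
}
```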
Sampling is particularly suited to scenarios that need "agentic behavior", where the server requires the LLM's help to complete a task. Typical use cases include reading and analyzing resources, making decisions based on context, generating structured data, and handling multi-step tasks.
An example case
3. A modern MCP example
3.1 Sequential Thinking MCP + using an MCP function for state intervention
Sequential Thinking MCP is a standard MCP example: it uses a single tool (function) to guide the LLM into thinking step by step and arriving at a conclusion. Its tool prompt template is the tool description quoted at the top of this post.
Below, the classic Ruozhiba riddle "The water can't be drunk directly and the apple can't be eaten directly, so why can the apple be eaten once it has been washed with the water that can't be drunk directly?" is used as the question to walk through the whole guiding process. A sketch of what the first tool call might look like follows.
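As an illustration of what the first step of that process could look like on the wire, here is a hypothetical tools/call; the parameter names are taken from the prompt template quoted above, while the tool name `sequentialthinking` and the thought text are assumptions:

```python
first_thought_call = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "sequentialthinking",   # assumed tool name
        "arguments": {
            "thought": (
                "The water is unsafe to drink because of microbes and impurities, "
                "not because touching it contaminates food; washing only needs to "
                "remove surface dirt and residue from the apple."
            ),
            "thought_number": 1,
            "total_thoughts": 4,        # initial estimate; the model may revise it later
            "next_thought_needed": True,
        },
    },
}
```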
4. The Transport layer
The agent system communicates with the McpClient over JSON-RPC; every session begins with the initialize handshake sketched below.
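Concretely, every MCP session starts with an initialize exchange between client and server. Here is a sketch of those JSON-RPC messages, with method and field names from the spec and illustrative values:

```python
initialize_request = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",       # illustrative protocol revision
        "capabilities": {"sampling": {}},      # what the client supports
        "clientInfo": {"name": "example-agent", "version": "0.1.0"},
    },
}

initialize_response = {
    "jsonrpc": "2.0",
    "id": 0,
    "result": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
        "serverInfo": {"name": "example-server", "version": "0.1.0"},
    },
}

# After the response, the client sends a notification (no id, no reply expected).
initialized_notification = {"jsonrpc": "2.0", "method": "notifications/initialized"}
```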
There are two transport modes: stdio, where the server runs as a subprocess and exchanges JSON-RPC messages over stdin/stdout, and HTTP + SSE, where the client receives server events over a Server-Sent Events stream and sends messages via HTTP POST.
Why does HTTP-SSE use the seemingly odd design of separating the event endpoint from the message endpoint? As the specification excerpt quoted earlier explains, the separation simplifies CORS: message exchange can go through a "simple" HTTP POST endpoint, which avoids CORS preflight requests.
HTTP-SSE client, Python implementation: a simple Python sketch is given below.
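Since the original listing is not reproduced here, the following is only a minimal sketch of the HTTP+SSE transport flow: GET an SSE stream, read an `endpoint` event that names the message URL, then POST JSON-RPC requests there and read the responses as `message` events. It uses `httpx`; the server address and paths are illustrative.

```python
import asyncio
import json

import httpx

BASE_URL = "http://localhost:8080"   # illustrative MCP server address


async def sse_events(client: httpx.AsyncClient, url: str):
    """Yield (event, data) pairs from a Server-Sent Events stream."""
    async with client.stream("GET", url, timeout=None) as response:
        event, data = "message", []
        async for line in response.aiter_lines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data.append(line[len("data:"):].strip())
            elif line == "":                  # a blank line terminates one event
                if data:
                    yield event, "\n".join(data)
                event, data = "message", []


async def main() -> None:
    async with httpx.AsyncClient(base_url=BASE_URL) as client:
        events = sse_events(client, "/sse")

        # The first event tells the client where to POST JSON-RPC messages.
        event, data = await anext(events)
        assert event == "endpoint"
        post_url = data

        # Send tools/list over the message endpoint; the reply arrives on the SSE stream.
        await client.post(post_url, json={"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
        async for event, data in events:
            if event == "message":
                print(json.loads(data))
                break


asyncio.run(main())
```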