I used a telemetry tool to capture Gemini CLI's conversation message records and did a quick analysis. I found it quite illuminating, so I'm writing it down here. In the conversation I asked Gemini CLI to analyze its own code; below I go through the messages one by one.
The first user prompt is issued by Gemini CLI itself, with the following content:
{
"role": "user",
"parts": [
{
"text": "This is the Gemini CLI. We are setting up the context for our chat.\n Today's date is 2025 年 8 月 1 日星期五.\n My operating system is: darwin\n I'm currently working in the directory: /Users/xxx/mycode/gemini-cli\n Showing up to 200 items (files + folders). Folders or files indicated with ... contain more items not shown, were ignored, or the display limit (200 items) was reached.\n\n/Users/xxx/mycode/gemini-cli/\n├───.editorconfig\n├───.gitattributes\n├───.gitignore\n├───.npmrc\n├───.nvmrc\n├───.prettierrc.json\n├───CONTRIBUTING.md\n├───Dockerfile\n├───esbuild.config.js\n├───eslint.config.js\n├───GEMINI.md\n├───LICENSE\n├───Makefile\n├───package-lock.json\n├───package.json\n├───pnpm-lock.yaml\n├───README.md\n├───ROADMAP.md\n├───tsconfig.json\n├───.gcp/\n│ ├───Dockerfile.gemini-code-builder\n│ └───release-docker.yaml\n├───.gemini/\n├───.git/...\n├───.github/\n│ ├───CODEOWNERS\n│ ├───pull_request_template.md\n│ ├───actions/\n
...(truncated)
"
}
]
}
From this record we can clearly see that the first message supplies several important pieces of information:
This is the Gemini CLI
We are setting up the context for our chat.
Today's date is 2025 年 8 月 1 日星期五.\n My operating system is: darwin
After receiving this information, the model sends back its first reply:
{
"role": "model",
"parts": [{ "text": "Got it. Thanks for the context!" }]
}
It politely acknowledges with "Got it. Thanks for the context!"
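Judging from these two messages, the CLI seeds the chat history with an environment-context turn plus a canned acknowledgement before the real user prompt is ever sent. Below is a minimal sketch of that pattern in TypeScript; the Content/Part shapes mirror the captured records, while buildInitialHistory and the folderListing parameter are hypothetical stand-ins, not Gemini CLI's actual code.

// Shape of one history entry, mirroring the captured records: a role plus parts.
interface Part { text?: string }
interface Content { role: 'user' | 'model'; parts: Part[] }

// Build the two seed messages observed above. `folderListing` stands in for the
// truncated directory tree the CLI embeds (how that tree is rendered is not
// visible in the capture, so here it is just a string parameter).
function buildInitialHistory(cwd: string, folderListing: string): Content[] {
  const envContext = [
    'This is the Gemini CLI. We are setting up the context for our chat.',
    `Today's date is ${new Date().toDateString()}.`,
    `My operating system is: ${process.platform}`,
    `I'm currently working in the directory: ${cwd}`,
    folderListing,
  ].join('\n');

  return [
    { role: 'user', parts: [{ text: envContext }] },
    // The canned acknowledgement seen above, so the real user prompt is sent
    // against a history that already contains the environment context.
    { role: 'model', parts: [{ text: 'Got it. Thanks for the context!' }] },
  ];
}

This way the model never needs to ask about the OS, working directory, or project layout: every later turn is answered against a history that already starts with that context.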
Across the whole conversation there is actually only one prompt typed by the user, a request (in Chinese) to analyze the code under @packages/core/, explain its technical architecture and working principles in detail, and draw a flowchart:
{
"role": "user",
"parts": [
{
"text": "帮我分析一下 @packages/core/ 中的代码,详细分析其技术架构、工作原理,并画出流程图"
}
]
}
In the message record, the model's reply is as follows. It says (in Chinese) that @packages/core is the core of Gemini CLI, the "brain" connecting user input from packages/cli with the Google AI model, responsible for model interaction, tool execution, state management and configuration, and that it first needs to browse the package's directory structure:
{
"role": "model",
"parts": [
{
"text": "好的,我们来深入分析一下 `@packages/core` 的代码。\n\n 这个包是 Gemini CLI 的核心,负责处理与 Google AI 模型的交互、工具执行、状态管理和配置等关键任务。可以把它看作是连接用户输入(来自 `packages/cli`)和模型能力( Google AI )的“大脑”。\n\n 首先,我需要浏览一下 `@packages/core` 的目录结构来更好地理解其内部模块。\n\n"
}
]
},
{
"role": "model",
"parts": [
{
"functionCall": {
"name": "list_directory",
"args": {
"path": "/Users/xxx/mycode/gemini-cli/packages/core/src/"
}
}
}
]
}
These history entries contain only the model's final reply after it finished thinking. The complete raw response was also captured and is shown below: it arrives as a stream of chunks, and in it you can see the model's thinking process (the parts flagged with thought: true). In the message history all of these thought parts are dropped and the streamed text fragments are merged back together; a sketch of that collapsing step follows the raw capture.
{
"type": "response",
"content": [
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"thought": true,
"text": "**Initiating Code Analysis**\n\n\\n\\n\n\nI'm focusing on breaking down the user's request. My first step is identifying the core task, which is to analyze the `@packages/core/` directory. Then, I will address the required outputs, like detailed code summaries and potential vulnerabilities. I'm now considering how best to structure these outputs for clarity and impact.\n\n\n"
}
]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"thought": true,
"text": "**Refining Output Structures**\n\n\\n\\n\n\nI'm working on the structure of the technical architecture analysis. I'm also structuring the explanation of working principles, ensuring clarity and readability in Chinese. The flowchart generation is the next step; I'm considering tools and approaches to best visualize the code's logic.\n\n\n"
}
]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"thought": true,
"text": "**Examining Module Interactions**\n\n\\n\\n\n\nI've just finished analyzing `index.ts` to understand the module exports. The next step is examining the interactions between modules, especially how `core`, `mcp`, and `config` are connected. I'll focus on function calls and dependencies to build a clear map of data flow. This will help me understand the technical architecture better, which is what I'm targeting next.\n\n\n"
}
]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"thought": true,
"text": "**Defining Core Package Modules**\n\n\\n\\n\n\nI've moved on to analyzing the `index.ts` files within key subdirectories, like `core`, `mcp`, and `tools`. I'm identifying what each module exports to understand its role in the larger system. Now I'm beginning to generate initial descriptions for each module, and I'm also thinking about the interactions and data flow.\n\n\n"
}
]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"thought": true,
"text": "**Examining Core Module Structure**\n\n\\n\\n\n\nI'm initiating the directory listing to obtain the complete file structure. After that, I'll analyze the `index.ts` file to understand the module's public API. I will then examine the key subdirectories. This approach helps me grasp the core functions and the relationships among the modules within the package.\n\n\n"
}
]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"thought": true,
"text": "**Summarizing Module Roles**\n\n\\n\\n\n\nI'm currently reading the `index.ts` files within the `core`, `mcp`, `tools`, and `config` directories. I'm focusing on the function exports to determine each module's function within the larger context. The key objective is to understand their roles in the system. I am also starting to think about the data flow among them.\n\n\n"
}
]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"thought": true,
"text": "**Defining Initial Hypotheses**\n\n\\n\\n\n\nI'm consolidating my initial findings. I now understand the directory structure and the basic responsibilities of the modules based on the `index.ts` exports. My next step is to confirm the roles of each module, specifically `core`, `mcp`, and `tools`. Then I will start synthesizing the response in Chinese and begin generating a text-based flowchart.\n\n\n"
}
]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"thought": true,
"text": "**Defining Execution Steps**\n\n\\n\\n\n\nI'm planning the execution phase, including the order of file reads and the analysis process for each. I will start by listing the directory's content, then proceed with analyzing key files like `index.ts` in core, mcp, and tools. The goal is to extract module exports and understand interdependencies. I'm structuring my actions using the identified tool usage.\n\n\n"
}
]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"thought": true,
"text": "**Structuring Architectural Overview**\n\n\\n\\n\n\nI'm concentrating on a response structure for the detailed architectural overview. I'm dividing the technical architecture section by module (core, mcp, tools, config, etc.) for clarity. Also, I'm planning the end-to-end flow description. I will then use a numbered list to guide the user through the steps in a typical interaction. After this, I will build the text-based Mermaid flowchart for visualization.\n\n\n"
}
]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"thought": true,
"text": "**Defining Core Modules**\n\n\\n\\n\n\nI'm focused on using the directory listing and examining `index.ts` files to define key modules. My goal is to understand the responsibilities of `core`, `mcp`, `tools`, and `config`. I am also planning to include a review of `services`, `telemetry`, and `utils`. The next phase is synthesizing my understanding of the data flow between the modules.\n\n\n"
}
]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"text": "好的,我们来深入分析一下 `@packages/core` 的代码。\n\n 这个包是 Gemini CLI 的核心,"
}
]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"text": "负责处理与 Google AI 模型的交互、工具执行、状态管理和配置等关键任务。可以把它看作是连接用户输入(来自 `packages/cli`)和模型能力( Google AI )的“大脑”。\n\n 首先,我需要浏览一下 `@packages/core` 的目录结构来更好地"
}
]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [{ "text": "理解其内部模块。\n\n" }]
}
}
],
"usageMetadata": { "trafficType": "PROVISIONED_THROUGHPUT" }
},
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"functionCall": {
"name": "list_directory",
"args": {
"path": "/Users/xupeng/mycode/gemini-cli/packages/core/src/"
}
}
}
]
},
"finishReason": "STOP"
}
],
"usageMetadata": {
"promptTokenCount": 12968,
"candidatesTokenCount": 120,
"totalTokenCount": 15550,
"trafficType": "PROVISIONED_THROUGHPUT",
"promptTokensDetails": [{ "modality": "TEXT", "tokenCount": 12968 }],
"candidatesTokensDetails": [
{ "modality": "TEXT", "tokenCount": 120 }
],
"thoughtsTokenCount": 2462
}
}
]
}
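Two things stand out in this raw capture. First, the reply is streamed: the thought parts arrive first, then the visible text split across several chunks, and finally the functionCall. Second, the usageMetadata of the last chunk adds up exactly: promptTokenCount (12968) + candidatesTokenCount (120) + thoughtsTokenCount (2462) = totalTokenCount (15550), meaning the hidden thinking cost far more output tokens than the visible reply. The sketch below shows one way a client could collapse such a stream into the history entries we saw earlier: dropping parts flagged thought: true, concatenating the streamed text fragments, and keeping the functionCall as its own entry. The field names follow the captured JSON, but the function itself is an illustration, not Gemini CLI's actual implementation.

interface Part {
  text?: string;
  thought?: boolean;
  functionCall?: { name: string; args: Record<string, unknown> };
}
interface Chunk { candidates: { content: { role: string; parts: Part[] } }[] }
interface Content { role: string; parts: Part[] }

// Collapse a streamed response into history entries: drop thought parts,
// merge consecutive text fragments, keep the functionCall as a separate entry.
function collapseStream(chunks: Chunk[]): Content[] {
  const history: Content[] = [];
  let textBuffer = '';

  const flushText = () => {
    if (textBuffer) {
      history.push({ role: 'model', parts: [{ text: textBuffer }] });
      textBuffer = '';
    }
  };

  for (const chunk of chunks) {
    for (const candidate of chunk.candidates) {
      for (const part of candidate.content.parts) {
        if (part.thought) continue; // thoughts never reach the history
        if (part.text !== undefined) {
          textBuffer += part.text; // merge streamed text fragments
        } else if (part.functionCall) {
          flushText();
          history.push({ role: 'model', parts: [{ functionCall: part.functionCall }] });
        }
      }
    }
  }
  flushText();
  return history;
}

Applied to the chunks above, this yields exactly the two history entries shown earlier: one with the merged Chinese text and one containing only the list_directory functionCall.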
Notice that the model reply above contains a special kind of response: parts holds nothing but a functionCall node. This is how the model invokes a tool; the reply carries the name of the tool to call together with its arguments. When Gemini CLI receives this tool-call information, it executes the corresponding tool locally and then sends the tool's result back to the model. The specific message looks like this (a sketch of the full round-trip loop follows it):
{
"role": "user",
"parts": [
{
"functionResponse": {
"id": "list_directory-1754016459561-12bef4b2f24e9",
"name": "list_directory",
"response": {
"output": "Directory listing for /Users/xxx/mycode/gemini-cli/packages/core/src/:\n[DIR] __mocks__\n[DIR] code_assist\n[DIR] config\n[DIR] core\n[DIR] mcp\n[DIR] services\n[DIR] telemetry\n[DIR] tools\n[DIR] utils\nindex.test.ts\nindex.ts"
}
}
}
]
}
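Putting the pieces together, the tool-call round trip is just a loop: whenever a model turn ends with a functionCall part, the CLI runs the named tool locally, pushes the result back as a user-role message carrying a functionResponse, and asks the model to continue; the loop ends when the model answers with plain text. A rough sketch of that loop follows; sendMessage and the tool registry are hypothetical stand-ins, not Gemini CLI's real API.

interface FunctionCall { name: string; args: Record<string, unknown> }
interface Part {
  text?: string;
  functionCall?: FunctionCall;
  functionResponse?: { id: string; name: string; response: { output: string } };
}
interface Content { role: 'user' | 'model'; parts: Part[] }

// Hypothetical stand-ins for the model endpoint and the local tool registry.
type SendMessage = (history: Content[]) => Promise<Content>;
type ToolRegistry = Record<string, (args: Record<string, unknown>) => Promise<string>>;

async function runTurn(history: Content[], sendMessage: SendMessage, tools: ToolRegistry) {
  for (;;) {
    const reply = await sendMessage(history);
    history.push(reply);

    const call = reply.parts.find((p) => p.functionCall)?.functionCall;
    if (!call) return reply; // plain text answer: the turn is finished

    // Execute the requested tool locally and feed its output back to the model
    // as a user-role functionResponse, just like the captured message above.
    const output = await tools[call.name](call.args);
    history.push({
      role: 'user',
      parts: [{
        functionResponse: {
          id: `${call.name}-${Date.now()}`,
          name: call.name,
          response: { output },
        },
      }],
    });
  }
}

From here the model can keep requesting further tool calls (the thought chunks above already mention reading the various index.ts files) until it has gathered enough information to produce the requested analysis.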
By analyzing this interaction history, we can see that the exchange between Gemini CLI and the user follows a clear flow: