# Chapter 6: Zhipu AI Integration

## Introduction

Zhipu AI (智谱 AI) builds large language models and grew out of a Tsinghua University research team. Key characteristics:

- Academic background: founded by a Tsinghua University technical team
- Open-source leadership: the open-source GLM model series
- Cost-effective: competitively priced
- Optimized for Chinese: strong Chinese-language capability
## Model Overview (2026)

| Model | Characteristics | Best For |
|---|---|---|
| glm-4-plus | Most capable | Complex tasks |
| glm-4-air | Balanced performance | General-purpose use |
| glm-4-flash | Fast responses | Simple tasks |
| glm-4v | Multimodal | Image understanding |
## Getting an API Key

1. Visit https://open.bigmodel.cn/
2. Register and log in
3. Create an API Key
## Installing the SDK
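The official Python SDK is published on PyPI as `zhipuai` and can be installed with pip:

```shell
pip install zhipuai
```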
## Basic Usage
### Initializing the Client
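The client comes from the official `zhipuai` package; a minimal sketch, assuming the key is exported as a `ZHIPUAI_API_KEY` environment variable (the variable name is a convention chosen here, not something the SDK requires):

```python
import os

from zhipuai import ZhipuAI

# Read the key from the environment instead of hard-coding it in source.
client = ZhipuAI(api_key=os.environ["ZHIPUAI_API_KEY"])
```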
### Chat Completions
```python
response = client.chat.completions.create(
    model="glm-4-plus",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello"}
    ]
)
print(response.choices[0].message.content)
```
### Streaming Output
```python
response = client.chat.completions.create(
    model="glm-4-plus",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
)
for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
## Using the OpenAI-Compatible Format

Zhipu's endpoint is OpenAI-compatible, so the official `openai` SDK also works: just point `base_url` at Zhipu's API.
```python
from openai import OpenAI

client = OpenAI(
    api_key="xxx.xxx",
    base_url="https://open.bigmodel.cn/api/paas/v4"
)
response = client.chat.completions.create(
    model="glm-4-plus",
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)
```
## Advanced Features

### Function Calling
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"]
            }
        }
    }
]
response = client.chat.completions.create(
    model="glm-4-plus",
    messages=[{"role": "user", "content": "What's the weather like in Beijing?"}],
    tools=tools
)
if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    print(f"Function called: {tool_call.function.name}")
    print(f"Arguments: {tool_call.function.arguments}")
```
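The code above only reports which function the model wants to call. To complete the loop, the JSON `arguments` string must be parsed, the matching local function invoked, and the result sent back to the model. A minimal dispatch sketch (the weather data is a stand-in, not a real API):

```python
import json


def get_weather(city: str) -> str:
    # Stand-in implementation; a real tool would query a weather service.
    return f"{city}: sunny, 25°C"


# Map tool names the model may request to local callables.
TOOL_REGISTRY = {"get_weather": get_weather}


def dispatch(name: str, arguments: str) -> str:
    """Parse the model's JSON arguments and invoke the matching local function."""
    args = json.loads(arguments)
    return TOOL_REGISTRY[name](**args)


result = dispatch("get_weather", '{"city": "Beijing"}')
print(result)  # Beijing: sunny, 25°C
```

Following the OpenAI-compatible convention, the result would then be appended to `messages` as `{"role": "tool", "content": result, "tool_call_id": tool_call.id}` before a second `create` call, letting the model compose its final answer.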
### Image Understanding (GLM-4V)
```python
response = client.chat.completions.create(
    model="glm-4v",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/image.jpg"}
                },
                {"type": "text", "text": "Describe this image"}
            ]
        }
    ]
)
print(response.choices[0].message.content)
```
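The example above passes a public URL. For local files, base64-encoded image content can be sent in the `image_url` field instead, as OpenAI-compatible multimodal APIs commonly allow (whether a raw base64 string or a `data:` URI is expected is worth verifying against the current platform docs). A small encoding helper:

```python
import base64


def image_to_base64(path: str) -> str:
    """Read a local image file and return its base64-encoded contents."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


# The encoded string would replace the URL in the request, e.g.:
# {"type": "image_url", "image_url": {"url": image_to_base64("chart.png")}}
```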
### Long-Context Handling

```python
# GLM-4 supports a 128K-token context window
response = client.chat.completions.create(
    model="glm-4-plus",
    messages=[
        {"role": "system", "content": "Very long document content..." * 1000},
        {"role": "user", "content": "Summarize this document"}
    ]
)
```
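Even a 128K window can overflow on large documents, so it helps to cap the input before sending it. A crude guard, using character count as a rough proxy for tokens (the actual ratio depends on the tokenizer and the language of the text):

```python
def truncate_to_budget(text: str, max_chars: int = 200_000) -> str:
    """Crudely cap input length; character count only approximates token count."""
    if len(text) <= max_chars:
        return text
    return text[:max_chars]
```

For documents that must be processed in full, splitting into chunks and summarizing each before a final merge is the usual alternative to truncation.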
## Pricing Comparison

| Model | Input Price | Output Price |
|---|---|---|
| glm-4-plus | ¥50 / 1M tokens | ¥50 / 1M tokens |
| glm-4-air | ¥1 / 1M tokens | ¥1 / 1M tokens |
| glm-4-flash | Free | Free |

Note: glm-4-flash is free to use, making it ideal for testing and low-volume scenarios.
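With the table above, per-call cost is easy to estimate from the token counts in `response.usage` (`prompt_tokens` and `completion_tokens`). A small helper, taking prices per one million tokens as in the table:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_price: float, output_price: float) -> float:
    """Estimate the cost of a call in yuan; prices are per one million tokens."""
    return (prompt_tokens * input_price + completion_tokens * output_price) / 1_000_000


# e.g. a glm-4-air call with 10,000 input tokens and 2,000 output tokens:
print(estimate_cost(10_000, 2_000, 1, 1))  # 0.012
```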
## Complete Example

```python
from zhipuai import ZhipuAI


class GLMClient:
    def __init__(self, api_key: str):
        self.client = ZhipuAI(api_key=api_key)

    def chat(self, prompt: str, model: str = "glm-4-flash", stream: bool = False):
        """Plain chat; returns a string, or a generator when stream=True."""
        if stream:
            return self._stream_chat(prompt, model)
        return self._sync_chat(prompt, model)

    def _sync_chat(self, prompt: str, model: str):
        response = self.client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content

    def _stream_chat(self, prompt: str, model: str):
        response = self.client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            stream=True
        )
        for chunk in response:
            if chunk.choices[0].delta.content:
                yield chunk.choices[0].delta.content

    def analyze_image(self, image_url: str, question: str):
        """Image analysis with GLM-4V."""
        response = self.client.chat.completions.create(
            model="glm-4v",
            messages=[
                {
                    "role": "user",
                    "content": [
                        {"type": "image_url", "image_url": {"url": image_url}},
                        {"type": "text", "text": question}
                    ]
                }
            ]
        )
        return response.choices[0].message.content


# Usage
client = GLMClient("xxx.xxx")

# Try it out with the free model
answer = client.chat("Give me an introduction to Python", model="glm-4-flash")
print(answer)

# Streaming output
for text in client.chat("Write a poem", stream=True):
    print(text, end="", flush=True)

# Image analysis
result = client.analyze_image("https://example.com/chart.png", "Analyze this chart")
print(result)
```
## Summary

In this chapter we covered:

- ✅ An overview of Zhipu AI
- ✅ Using the GLM-4 model series
- ✅ The OpenAI-compatible format
- ✅ Function calling
- ✅ Image understanding (GLM-4V)
- ✅ The free glm-4-flash model
## Next Chapter

Chapter 7: Kimi Integration - learn to use the Kimi API from Moonshot AI (月之暗面).