Chapter 2: OpenAI Integration

Installing the SDK

pip install openai

Model Overview (2026)

| Model | Strengths | Best for |
| --- | --- | --- |
| gpt-5.4 | Most capable | Complex tasks, professional work |
| gpt-5.4-mini | Best price/performance | Coding, sub-agents |
| gpt-5.4-nano | Cheapest | Simple tasks, high concurrency |
| gpt-realtime-1.5 | Real-time voice | Voice conversations |
| gpt-image-1.5 | Image generation | Image creation |

Basic Usage

Initializing the Client

from openai import OpenAI

client = OpenAI(
    api_key="sk-xxx",
    base_url="https://api.openai.com/v1"  # optional; change this when routing through a proxy
)
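Hardcoding a key in source code is best avoided. The SDK also reads the `OPENAI_API_KEY` environment variable when no `api_key` argument is given, so a common pattern is a small resolver like this sketch (the helper name is ours, not part of the SDK):

```python
import os

def resolve_api_key(explicit=None):
    """Prefer an explicitly passed key, then the OPENAI_API_KEY env variable."""
    key = explicit or os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError("No API key: set OPENAI_API_KEY or pass one explicitly")
    return key

# client = OpenAI(api_key=resolve_api_key())
# With OPENAI_API_KEY set, plain OpenAI() behaves the same way.
```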

Chat Completions

response = client.chat.completions.create(
    model="gpt-5.4-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello"}
    ]
)

print(response.choices[0].message.content)

Streaming Output

stream = client.chat.completions.create(
    model="gpt-5.4-mini",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
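When you need both live output and the complete text afterwards, accumulate the fragments as they arrive. A minimal helper sketch (it accepts any iterable of text fragments, such as a generator yielding `chunk.choices[0].delta.content` from the loop above):

```python
def collect_stream(fragments):
    """Print text fragments as they arrive and return the joined full text."""
    parts = []
    for text in fragments:
        print(text, end="", flush=True)
        parts.append(text)
    print()  # final newline after the stream ends
    return "".join(parts)
```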

Advanced Features

Function Calling

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="gpt-5.4-mini",
    messages=[{"role": "user", "content": "What's the weather like in Beijing?"}],
    tools=tools
)

# Handle the tool call
if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    print(f"Function called: {tool_call.function.name}")
    print(f"Arguments: {tool_call.function.arguments}")
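The round trip is completed by running the requested function yourself and sending the result back as a `"tool"` role message, so the model can produce a final answer. A sketch of that dispatch step (the `get_weather` body is a hypothetical stand-in; a real one would query a weather API):

```python
import json

def get_weather(city):
    # Hypothetical stand-in for a real weather lookup.
    return f"Sunny, 25°C in {city}"

def handle_tool_call(tool_call):
    """Run the requested function and build the follow-up 'tool' message."""
    args = json.loads(tool_call.function.arguments)
    if tool_call.function.name == "get_weather":
        result = get_weather(**args)
    else:
        result = f"Unknown function: {tool_call.function.name}"
    return {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": result,
    }
```

Append the returned message after the assistant message that contains the tool call, then call `client.chat.completions.create` again with the extended message list to get the model's final reply.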

JSON Mode

response = client.chat.completions.create(
    model="gpt-5.4-mini",
    messages=[{"role": "user", "content": "List three fruits, respond in JSON format"}],
    response_format={"type": "json_object"}
)

import json
data = json.loads(response.choices[0].message.content)
print(data)
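Even with `response_format={"type": "json_object"}`, it is worth checking that the parsed object has the shape you expect before using it. A small defensive helper (the key name `fruits` in the usage note is only an assumption about what this particular prompt returns):

```python
import json

def parse_json_reply(text, required_keys=()):
    """Parse a JSON-mode reply and verify expected top-level keys exist."""
    data = json.loads(text)
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"reply is missing keys: {missing}")
    return data
```

For example, `parse_json_reply(response.choices[0].message.content, required_keys=("fruits",))` would fail fast if the model chose a different top-level structure.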

Image Understanding

response = client.chat.completions.create(
    model="gpt-5.4",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/image.jpg"}
                }
            ]
        }
    ]
)

Image Understanding (Base64)

import base64

# Read a local image file
with open("image.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-5.4",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{image_data}"
                    }
                }
            ]
        }
    ]
)
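The read-and-encode step is easy to get subtly wrong, so it can be wrapped in a small helper that also builds the data URL. A sketch (the helper name is ours; the MIME type is guessed from the file extension):

```python
import base64
import mimetypes

def image_to_data_url(path):
    """Encode a local image file as a data: URL usable in image_url content."""
    mime, _ = mimetypes.guess_type(path)
    mime = mime or "application/octet-stream"  # fallback for unknown extensions
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"
```

With it, the `image_url` entry above becomes `{"url": image_to_data_url("image.png")}`.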

Prompt Caching (90% Savings on Cache Hits)

GPT-5.4 supports prompt caching; repeated content can cut input costs substantially:

response = client.chat.completions.create(
    model="gpt-5.4-mini",
    messages=[
        {
            "role": "system",
            "content": "A very long system prompt..."  # this part gets cached
        },
        {"role": "user", "content": "User question"}
    ],
    # caching applies automatically; no extra parameters needed
)

Cached input pricing: $0.75/1M drops to $0.075/1M on a cache hit, a 90% saving.

Embedding Vectors

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Hello world"
)

vector = response.data[0].embedding
print(f"Vector dimension: {len(vector)}")
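Embedding vectors are typically compared with cosine similarity: values close to 1 mean the texts are close in meaning. A minimal dependency-free sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

To compare two texts, embed each one and pass the two `response.data[0].embedding` lists to this function.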

Error Handling

from openai import OpenAI, APIError, RateLimitError, AuthenticationError

try:
    response = client.chat.completions.create(
        model="gpt-5.4-mini",
        messages=[{"role": "user", "content": "你好"}]
    )
except AuthenticationError:
    print("Invalid API key")
except RateLimitError:
    print("Rate limit exceeded; retry after a short wait")
except APIError as e:
    print(f"API error: {e}")
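For transient failures such as rate limits, the usual pattern is retrying with exponential backoff rather than failing immediately. A generic sketch (the exception types are passed in, so it can wrap `RateLimitError` or any other transient error):

```python
import time

def with_retry(fn, exceptions, attempts=3, base_delay=1.0):
    """Call fn(), retrying on the given exceptions with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise  # out of attempts: let the caller see the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Usage would look like `with_retry(lambda: client.chat.completions.create(...), (RateLimitError,))`.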

Pricing Comparison

| Model | Input | Cached input | Output |
| --- | --- | --- | --- |
| gpt-5.4 | $2.50/1M | $0.25/1M | $15.00/1M |
| gpt-5.4-mini | $0.75/1M | $0.075/1M | $4.50/1M |
| gpt-5.4-nano | $0.20/1M | $0.02/1M | $1.25/1M |
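The table translates directly into a small cost estimator. A sketch (prices copied from the table above; purely illustrative, since real pricing can change):

```python
# USD per 1M tokens: (input, cached input, output), from the table above
PRICES = {
    "gpt-5.4": (2.50, 0.25, 15.00),
    "gpt-5.4-mini": (0.75, 0.075, 4.50),
    "gpt-5.4-nano": (0.20, 0.02, 1.25),
}

def estimate_cost(model, input_tokens, output_tokens, cached_tokens=0):
    """Estimated request cost in USD for the given token counts."""
    inp, cached, out = PRICES[model]
    return (
        (input_tokens - cached_tokens) * inp
        + cached_tokens * cached
        + output_tokens * out
    ) / 1_000_000
```

For example, a gpt-5.4-mini call with 1M input tokens (80% cached) and 100K output tokens comes out to about $0.66.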

Complete Example

from openai import OpenAI

def chat(
    prompt: str,
    system: str = "You are a helpful assistant",
    model: str = "gpt-5.4-mini",
    stream: bool = False
):
    client = OpenAI(api_key="sk-xxx")
    messages = [
        {"role": "system", "content": system},
        {"role": "user", "content": prompt}
    ]

    if stream:
        # Any function containing `yield` is a generator, so the streaming
        # path lives in an inner generator; otherwise the non-stream branch
        # could never return a plain string.
        def generate():
            response = client.chat.completions.create(
                model=model, messages=messages, stream=True
            )
            for chunk in response:
                if chunk.choices[0].delta.content:
                    yield chunk.choices[0].delta.content
        return generate()

    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content

# Usage
answer = chat("Give me an introduction to Python")
print(answer)

# Streaming output
for text in chat("Write a poem", stream=True):
    print(text, end="", flush=True)

Summary

In this chapter we covered:

  • ✅ Installing the OpenAI SDK
  • ✅ Using the GPT-5.4 model family
  • ✅ Streaming output
  • ✅ Function calling
  • ✅ JSON mode
  • ✅ Image understanding
  • ✅ Saving costs with Prompt Caching
  • ✅ Embedding vectors

Next Chapter

Chapter 3: DeepSeek Integration - learn about DeepSeek V3.2 and the R1 reasoning model.