First, install the OpenAI SDK via pip:
pip install openai
The DeepSeek API documentation is available at https://api-docs.deepseek.com/zh-cn/. That page explains how to use the API.

Before calling the API, follow the documentation to apply for an API key. A newly created key comes with 10 RMB of free credit by default, which lasts a fairly long time.

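One convenient way to supply the key is to read it at runtime instead of hard-coding it. The following is a minimal sketch, assuming you have exported the key as an environment variable named DEEPSEEK_API_KEY (the variable name is our own choice, not something the SDK requires):
import os

# Assumes `export DEEPSEEK_API_KEY=sk-...` was run beforehand; the variable name is an assumption
api_key = os.environ["DEEPSEEK_API_KEY"]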
To start, let's get familiar with the OpenAI-compatible interface and build a basic question-answer call on top of DeepSeek. The code looks like this:
from openai import OpenAI
client = OpenAI(api_key=api_key, base_url="https://api.deepseek.com")
def get_completion(prompt, model="deepseek-chat"):
    # Send a single-turn chat request to DeepSeek and return the raw response object
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": prompt},
        ],
        stream=False
    )
    return response
resp = get_completion("What is 1+1?")
print(resp)
print(resp.choices[0].message.content)
Here we ask what 1 + 1 equals; the model's answer is shown below:

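The call above sets stream=False, so the full answer comes back in a single response. The same client can also stream tokens as they are generated; the following is a minimal sketch based on the standard OpenAI SDK streaming interface:
# Streaming variant: print the answer incrementally as chunks arrive
stream = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "What is 1+1?"}],
    stream=True,
)
for chunk in stream:
    # Some chunks carry no content (e.g. the final one), hence the `or ""`
    print(chunk.choices[0].delta.content or "", end="")
print()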
To reuse a capability across a whole class of questions, we often design a prompt template and substitute in the concrete question each time. To show how templates are used, the example below asks the LLM to rewrite a piece of text in a new expression style:
# Template development
customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse,\
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""
style = """American English \
in a calm and respectful tone
"""
prompt = f"""Translate the text \
that is delimited by triple backticks
into a style that is {style}.
text: ```{customer_email}```
"""
response = get_completion(prompt)
print(response)
print('------------')
print(response.choices[0].message.content)

First, install the LangChain packages via pip:
pip install langchain_openai langchain
Now let's implement the same logic with LangChain, using the LLM to transform the same piece of text. The implementation is as follows:
from langchain_openai import ChatOpenAI
chat = ChatOpenAI(
    model='deepseek-chat',
    openai_api_key=api_key,
    openai_api_base='https://api.deepseek.com',
    max_tokens=1024
)
template_string = """Translate the text \
that is delimited by triple backticks \
into a style that is {style}. \
text: ```{text}```
"""
from langchain.prompts import ChatPromptTemplate
prompt_template = ChatPromptTemplate.from_template(template_string)
customer_style = """American English \
in a calm and respectful tone
"""
customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse, \
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""
customer_messages = prompt_template.format_messages(
    style=customer_style,
    text=customer_email)
# Call the LLM to translate to the style of the customer message
# Reference: chat = ChatOpenAI(temperature=0.0)
customer_response = chat.invoke(customer_messages, temperature=0)
print(customer_response.content)
service_reply = """Hey there customer, \
the warranty does not cover \
cleaning expenses for your kitchen \
because it's your fault that \
you misused your blender \
by forgetting to put the lid on before \
starting the blender. \
Tough luck! See ya!
"""
service_style_pirate = """\
a polite tone \
that speaks in English Pirate\
"""
service_messages = prompt_template.format_messages(
    style=service_style_pirate,
    text=service_reply)
service_response = chat.invoke(service_messages, temperature=0)
print(service_response.content)
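The comment above mentions ChatOpenAI(temperature=0.0): instead of passing temperature on each invoke call, you can fix it when constructing the model so that every call through that instance uses the same setting. A sketch under the same assumptions as the code above:
chat = ChatOpenAI(
    model='deepseek-chat',
    openai_api_key=api_key,
    openai_api_base='https://api.deepseek.com',
    max_tokens=1024,
    temperature=0,  # applied to every call made through this instance
)
service_response = chat.invoke(service_messages)
print(service_response.content)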

Next, how can we have the LLM return its answer in a specific structure, for example as JSON? We again build on the example above. First, what does the returned data structure look like:

This means we need to design the output schema:
gift_schema = ResponseSchema(
    name="gift",
    description="Was the item purchased as a gift for someone else? "
                "Answer True if yes, False if not or unknown."
)
delivery_days_schema = ResponseSchema(
    name="delivery_days",
    description="How many days did it take for the product to arrive? "
                "If this information is not found, output -1."
)
response_schemas = [gift_schema, delivery_days_schema]
We have now defined the returned data structure: gift is True or False, and delivery_days is the number of days for delivery, with a default of -1. The complete code is as follows:
from langchain_openai import ChatOpenAI
from langchain.output_parsers import ResponseSchema
from langchain.output_parsers import StructuredOutputParser
from langchain.prompts import ChatPromptTemplate
chat = ChatOpenAI(
    model='deepseek-chat',
    openai_api_key=api_key,
    openai_api_base='https://api.deepseek.com',
    max_tokens=1024
)
gift_schema = ResponseSchema(
    name="gift",
    description="Was the item purchased as a gift for someone else? "
                "Answer True if yes, False if not or unknown."
)
delivery_days_schema = ResponseSchema(
    name="delivery_days",
    description="How many days did it take for the product to arrive? "
                "If this information is not found, output -1."
)
response_schemas = [gift_schema, delivery_days_schema]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
print(output_parser)
format_instructions = output_parser.get_format_instructions()
print(format_instructions)
customer_review = """\
This leaf blower is pretty amazing. It has four settings:\
candle blower, gentle breeze, windy city, and tornado. \
It arrived in two days, just in time for my wife's \
anniversary present. \
I think my wife liked it so much she was speechless. \
So far I've been the only one using it, and I've been \
using it every other morning to clear the leaves on our lawn. \
It's slightly more expensive than the other leaf blowers \
out there, but I think it's worth it for the extra features.
"""
review_template = """\
For the following text, extract the following information:
gift: Was the item purchased as a gift for someone else? \
Answer True if yes, False if not or unknown.
delivery_days: How many days did it take for the product \
to arrive? If this information is not found, output -1.
Format the output as JSON with the following keys:
gift
delivery_days
text: {text}

{format_instructions}
"""
prompt = ChatPromptTemplate.from_template(template=review_template)
messages = prompt.format_messages(
    text=customer_review,
    format_instructions=format_instructions)
response = chat.invoke(messages, temperature=0)
output_dict = output_parser.parse(response.content)
print(output_dict)
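output_dict is a plain Python dict, so the extracted fields can be read directly. Depending on how the model formats its JSON, numeric values may come back as strings, so the cast below is a defensive assumption rather than something the parser guarantees:
gift = output_dict.get("gift")
delivery_days = int(output_dict.get("delivery_days", -1))  # may arrive as a string such as "2"
print(gift, delivery_days)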
