Testing the Feasibility of Deploying a Large Language Model on Embedded Devices — Model TinyLlama-1.1B-Chat-v1.0

This article tests the feasibility of deploying a large language model on embedded-class hardware using TinyLlama-1.1B-Chat-v1.0. It is intended as a practical reference for developers facing similar deployment questions.

The test loads TinyLlama-1.1B-Chat-v1.0, varies its inference parameters, and observes how those changes affect inference time.
Local environment:

Processor: Intel® Core™ i5-8400 CPU @ 2.80GHz
Installed RAM: 16.0 GB (15.9 GB usable)
Integrated GPU: Intel® UHD Graphics 630
Discrete GPU: NVIDIA GeForce GTX 1050
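Whether the timings below come from the GTX 1050 or the CPU depends on what device_map="auto" selects, so it is worth checking what PyTorch can actually see on this machine first. The following minimal check is a sketch added for illustration, not part of the original article; it only assumes torch is installed:

# Environment check sketch (illustrative, not from the original test script).
# Note: a GTX 1050 (Pascal) has no native bfloat16 support, so the bfloat16 pipeline
# below most likely ends up running on the CPU or runs slowly on this GPU.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("VRAM (GB):", torch.cuda.get_device_properties(0).total_memory / 1024**3)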

Main call whose parameters are varied in the test:

outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)

Source code (mirror): https://hf-mirror.com/TinyLlama/TinyLlama-1.1B-Chat-v1.0
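The full test script appears below. As a complement, here is a minimal sketch of the kind of parameter sweep the test implies, timing each combination of max_new_tokens and do_sample. The helper name time_generation and the parameter grid are illustrative, not from the original code, and the sketch assumes pipe and prompt have already been built as in the script below:

# Hypothetical timing sweep (helper name and parameter grid are illustrative).
# Assumes `pipe` and `prompt` exist, built the same way as in the full script below.
from datetime import datetime

def time_generation(pipe, prompt, **gen_kwargs):
    """Run one generation and return (elapsed_seconds, generated_text)."""
    start = datetime.now()
    outputs = pipe(prompt, **gen_kwargs)
    elapsed = (datetime.now() - start).total_seconds()
    return elapsed, outputs[0]["generated_text"]

# Vary max_new_tokens and do_sample while keeping the prompt fixed.
for max_new_tokens in (32, 64, 128, 256):
    for do_sample in (False, True):
        kwargs = {"max_new_tokens": max_new_tokens, "do_sample": do_sample}
        if do_sample:
            kwargs.update(temperature=0.7, top_k=50, top_p=0.95)
        elapsed, _ = time_generation(pipe, prompt, **kwargs)
        print(f"max_new_tokens={max_new_tokens} do_sample={do_sample}: {elapsed:.1f}s")

Printing one line per combination makes it easy to see the non-linear relationship between max_new_tokens and wall-clock time that the conclusions below describe.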

'''
https://hf-mirror.com/TinyLlama/TinyLlama-1.1B-Chat-v1.0
In testing, TinyLlama 1.1B performs quite well, noticeably better than a quantized Qwen 1.8B.
'''
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import os
from datetime import datetime

import torch

os.environ['TF_ENABLE_ONEDNN_OPTS'] = '0'
from transformers import pipeline

'''
pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://hf-mirror.com/docs/transformers/main/en/chat_templating
messages = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
    # {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
    {"role": "user", "content": "你叫什么名字?"},  # "What is your name?" in Chinese
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
'''
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...


def load_pipeline():
    pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
                    torch_dtype=torch.bfloat16, device_map="auto")
    return pipe


def generate_text(content, length=20):
    """Generate text for the given prompt (length is unused in the original script)."""
    messages = [
        # The original used role "提示", which the chat template does not recognize; "system" is the intended role.
        {"role": "system", "content": "This is a friendly chatbot..."},
        # {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
        {"role": "user", "content": content},
    ]
    prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

    datetime1 = datetime.now()
    outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
    print(outputs[0]["generated_text"])
    datetime2 = datetime.now()
    time12_interval = datetime2 - datetime1
    print("Elapsed time", time12_interval)

    if False:  # extra timing runs, disabled by default
        outputs = pipe(prompt, max_new_tokens=32, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
        print(outputs[0]["generated_text"])
        datetime3 = datetime.now()
        time23_interval = datetime3 - datetime2
        print("Elapsed time 2", time23_interval)

        outputs = pipe(prompt, max_new_tokens=32, do_sample=False, top_k=50)
        print(outputs[0]["generated_text"])
        datetime4 = datetime.now()
        time34_interval = datetime4 - datetime3
        print("Elapsed time 3", time34_interval)

        outputs = pipe(prompt, max_new_tokens=32, do_sample=True, temperature=0.7, top_k=30, top_p=0.95)
        print(outputs[0]["generated_text"])
        datetime5 = datetime.now()
        time45_interval = datetime5 - datetime4
        print("Elapsed time 4", time45_interval)

        outputs = pipe(prompt, max_new_tokens=32, do_sample=False, top_k=30)
        print(outputs[0]["generated_text"])
        datetime6 = datetime.now()
        time56_interval = datetime6 - datetime5
        print("Elapsed time 5", time56_interval)

        outputs = pipe(prompt, max_new_tokens=12, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
        print(outputs[0]["generated_text"])
        datetime7 = datetime.now()
        time67_interval = datetime7 - datetime6
        print("Elapsed time 6", time67_interval)

    '''
    Conclusions:
    - Changing top_p does not noticeably reduce inference time, and for the same question asked in
      Chinese and in English, the Chinese version takes roughly twice as long.
    - Setting do_sample=False barely reduces inference time.
    - Only max_new_tokens reduces inference time significantly, and the relationship is not linear:
      with max_new_tokens=256 inference takes about 2 minutes, while with max_new_tokens=32 it
      still takes about 1 minute. It is therefore better to keep max_new_tokens fairly large so
      that the answer is reasonably complete.
    '''
    return outputs


if __name__ == "__main__":
    '''main function'''
    global pipe  # pipe is used as a module-level global by generate_text()
    pipe = load_pipeline()
    # print('load pipe ok')
    while True:
        prompt = input("Enter a prompt (or type 'exit' to quit): ")
        if prompt.lower() == 'exit':
            break
        try:
            generated_text = generate_text(prompt)
            print("Generated text:")
            print(generated_text[0]["generated_text"])
        except Exception as e:
            print("Error:", e)
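A plausible explanation for the Chinese question taking roughly twice as long is that the Llama tokenizer splits Chinese text into more tokens per character than English text, and the Chinese answer is also more likely to run all the way to max_new_tokens instead of stopping early at an end-of-sequence token. The quick check below is a sketch, not part of the original test; it assumes pipe has been loaded as above, and the English sentence is an illustrative translation:

# Rough token-count comparison (sketch; assumes `pipe` has already been loaded as above).
zh_question = "如何开门?"              # the Chinese test question used in the run below
en_question = "How do I open a door?"  # illustrative English equivalent
print("Chinese prompt tokens:", len(pipe.tokenizer(zh_question)["input_ids"]))
print("English prompt tokens:", len(pipe.tokenizer(en_question)["input_ids"]))
# A fairer speed metric than raw wall-clock time is tokens generated per second:
# count the newly generated tokens of each answer and divide by the elapsed time.

A sample run of the script follows.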
Enter a prompt (or type 'exit' to quit): 如何开门?
<|user|>
如何开门?</s>
<|assistant|>
Certainly! Opening a door is a simple process that involves several steps. Here are the general steps to follow to open a door:
1. Turn off the lock: Turn off the lock with the key by pressing the "lock" button.
2. Press the handle: Use the handle to push the door open. If the door is mechanical, you may need to turn a knob or pull the door handle to activate the door.
3. Release the latch: Once the door is open, release the latch by pulling it backward.
4. Slide the door: Slide the door forward by pushing it against the wall with your feet or using a push bar.
5. Close the door: Once the door is open, close it by pressing the lock button or pulling the handle backward.
6. Use a second key: If the lock has a second key, make sure it is properly inserted and then turn it to the correct position to unlock the door.
Remember to always double-check the locks before opening a door, as some locks can be tricky to open. If you're unsure about the correct procedure for opening a door,
Elapsed time 0:04:23.561065
Generated text:
<|user|>
如何开门?</s>
<|assistant|>
Certainly! Opening a door is a simple process that involves several steps. Here are the general steps to follow to open a door:
1. Turn off the lock: Turn off the lock with the key by pressing the "lock" button.
2. Press the handle: Use the handle to push the door open. If the door is mechanical, you may need to turn a knob or pull the door handle to activate the door.
3. Release the latch: Once the door is open, release the latch by pulling it backward.
4. Slide the door: Slide the door forward by pushing it against the wall with your feet or using a push bar.
5. Close the door: Once the door is open, close it by pressing the lock button or pulling the handle backward.
6. Use a second key: If the lock has a second key, make sure it is properly inserted and then turn it to the correct position to unlock the door.
Remember to always double-check the locks before opening a door, as some locks can be tricky to open. If you're unsure about the correct procedure for opening a door,
Enter a prompt (or type 'exit' to quit):

That concludes this test of the feasibility of deploying the large language model TinyLlama-1.1B-Chat-v1.0 on embedded devices; hopefully the material above is a useful reference.


