MiniGPT-4 Local Deployment

2023-10-13 12:15
Tags: deployment, local, minigpt

This article walks through deploying MiniGPT-4 locally; hopefully it offers a useful reference for developers tackling the same task.

MiniGPT-4 GitHub homepage.
I followed the tutorial 深度学习笔记–本地部署Mini-GPT4 ("Deep Learning Notes: Deploying Mini-GPT4 Locally") and used plain http links.

The download.txt contents for fetching the LLaMA and Vicuna weights from Hugging Face are, respectively, as follows:

http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/.gitattributes
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/LICENSE
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/README.md
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/config.json
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/generation_config.json
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00001-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00002-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00003-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00004-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00005-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00006-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00007-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00008-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00009-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00010-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00011-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00012-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00013-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00014-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00015-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00016-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00017-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00018-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00019-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00020-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00021-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00022-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00023-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00024-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00025-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00026-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00027-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00028-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00029-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00030-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00031-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00032-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00033-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model.bin.index.json
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/special_tokens_map.json
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/tokenizer.model
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/tokenizer_config.json
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/.gitattributes
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/README.md
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/config.json
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/generation_config.json
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/pytorch_model-00001-of-00002.bin
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/pytorch_model-00002-of-00002.bin
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/pytorch_model.bin.index.json
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/special_tokens_map.json
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/tokenizer.model
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/tokenizer_config.json

The script for downloading the weights is shown below; it uses `wget ${file} --no-check-certificate` to skip HTTPS certificate verification:

#!/bin/bash
# read each URL from download.txt and download it, skipping certificate checks
while read file; do
    wget ${file} --no-check-certificate
done < download.txt
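
Since the 33 LLaMA shards are large, interrupted downloads are a practical concern. A minimal variant of the same loop, assuming nothing beyond standard wget, that resumes partial files with -c:

#!/bin/bash
# -c resumes a partially downloaded file instead of restarting it from scratch
while read file; do
    wget -c ${file} --no-check-certificate
done < download.txt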

In my environment, installing FastChat 0.1.10 caused a dependency conflict:

The conflict is caused by:
    fschat 0.1.10 depends on tokenizers>=0.12.1
    transformers 4.35.0.dev0 depends on tokenizers<0.15 and >=0.14
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

So I installed a different version of fschat instead:

pip install fschat
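
To confirm which versions pip actually resolved (and that the tokenizers pin no longer conflicts), a quick check using only pip itself:

# print the installed version of each package
pip show fschat transformers tokenizers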

Then additionally install accelerate:

pip install accelerate
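
Note that the lmsys/vicuna-7b-delta-v1.1 files downloaded above are delta weights, not a runnable model: they have to be merged with the LLaMA base weights first. FastChat documents an apply_delta entry point for this; a sketch, where the local paths below are placeholders for wherever you saved the downloads:

# merge the LLaMA base weights with the Vicuna delta to produce usable Vicuna weights
python3 -m fastchat.model.apply_delta \
    --base-model-path /root/minigpt4/llama-7b-hf \
    --target-model-path /root/minigpt4/vicuna-7b \
    --delta-path /root/minigpt4/vicuna-7b-delta-v1.1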

After that, when running the Python demo, just pip install whatever dependency is reported as missing; the authors did not provide a requirements.txt.

python demo.py --cfg-path eval_configs/minigpt4_eval.yaml  --gpu-id 0
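
Before the demo will start, the configs must point at the weights. Per the upstream MiniGPT-4 README, the Vicuna weight path goes in the llama_model field of minigpt4/configs/models/minigpt4.yaml, and the pretrained MiniGPT-4 checkpoint path in the ckpt field of eval_configs/minigpt4_eval.yaml; a sketch with placeholder paths:

# point the configs at the merged Vicuna weights and the pretrained checkpoint
sed -i 's|llama_model: .*|llama_model: "/root/minigpt4/vicuna-7b"|' minigpt4/configs/models/minigpt4.yaml
sed -i 's|ckpt: .*|ckpt: "/root/minigpt4/pretrained_minigpt4.pth"|' eval_configs/minigpt4_eval.yaml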

Other changes

Running python demo.py still hits a few other errors:

If downloading the BERT tokenizer and BERT config fails with an HTTPS error, you can download the files from Hugging Face manually and then modify the MiniGPT-4 source file models/blip2.py:

# load the tokenizer and encoder config from the local directory instead of huggingface.co
tokenizer = BertTokenizer.from_pretrained("/root/minigpt4/bert-base-uncased", local_files_only=True)
...
encoder_config = BertConfig.from_pretrained("/root/minigpt4/bert-base-uncased", local_files_only=True)
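
One way to get those files locally is to clone the bert-base-uncased repo from Hugging Face (a sketch assuming git-lfs is installed; the target directory matches the path used above):

# fetch the tokenizer/config files for bert-base-uncased into a local directory;
# if the same certificate problem appears, prefix the clone with GIT_SSL_NO_VERIFY=1
git lfs install
git clone https://huggingface.co/bert-base-uncased /root/minigpt4/bert-base-uncased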

To make the demo listen on the network instead of only 127.0.0.1:

demo.launch(share=True, enable_queue=True, server_name='0.0.0.0')
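
With server_name='0.0.0.0' the app binds all interfaces; Gradio serves on port 7860 by default, so the UI should be reachable from another machine (a quick check, with <server-ip> as a placeholder):

# verify the demo is reachable from outside the host
curl -I http://<server-ip>:7860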

That concludes this article on deploying MiniGPT-4 locally; I hope it proves helpful!


