MiniGPT-4 Local Deployment

2023-10-13 12:15
Tags: deployment, local, minigpt

This article walks through deploying MiniGPT-4 locally; hopefully it offers a useful reference to developers who need it.

MiniGPT-4 Git homepage.
I followed the post Deep Learning Notes – Deploying Mini-GPT4 Locally as a reference, using plain HTTP links.

The download.txt lists for fetching the LLaMA and Vicuna weights from Hugging Face are, respectively, as follows:

http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/.gitattributes
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/LICENSE
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/README.md
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/config.json
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/generation_config.json
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00001-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00002-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00003-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00004-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00005-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00006-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00007-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00008-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00009-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00010-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00011-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00012-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00013-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00014-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00015-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00016-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00017-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00018-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00019-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00020-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00021-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00022-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00023-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00024-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00025-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00026-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00027-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00028-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00029-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00030-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00031-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00032-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00033-of-00033.bin
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model.bin.index.json
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/special_tokens_map.json
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/tokenizer.model
http://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/tokenizer_config.json
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/.gitattributes
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/README.md
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/config.json
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/generation_config.json
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/pytorch_model-00001-of-00002.bin
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/pytorch_model-00002-of-00002.bin
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/pytorch_model.bin.index.json
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/special_tokens_map.json
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/tokenizer.model
http://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/tokenizer_config.json

The script for downloading the weights is shown below; it uses wget ${file} --no-check-certificate to bypass the HTTPS certificate check:

#!/bin/bash
# read each URL from download.txt and fetch it, skipping certificate checks
while read -r file; do
    wget --no-check-certificate "${file}"
done < download.txt
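
Once the script finishes, it is worth verifying that all shards arrived before moving on; for example, the LLaMA download should contain exactly 33 shard files (run inside the download directory; filenames taken from the list above):

ls pytorch_model-000*-of-00033.bin | wc -l    # should print 33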

In my environment, installing FastChat 0.1.10 caused a dependency conflict:

The conflict is caused by:
    fschat 0.1.10 depends on tokenizers>=0.12.1
    transformers 4.35.0.dev0 depends on tokenizers<0.15 and >=0.14
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

So I installed a different fschat version instead:

pip install fschat
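
To confirm which version pip actually resolved (the conflict above is version-specific):

pip show fschat    # the Version field shows the installed release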

Then additionally install accelerate:

pip install accelerate
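
One step worth spelling out: lmsys/vicuna-7b-delta-v1.1 is a delta, not a usable model, and it must be merged onto the LLaMA base weights before MiniGPT-4 can load it; this is what fschat is needed for here. A hedged sketch (the paths are placeholders; flag names differ between FastChat releases, so consult python -m fastchat.model.apply_delta --help):

python -m fastchat.model.apply_delta \
    --base /root/minigpt4/llama-7b-hf \
    --target /root/minigpt4/vicuna-7b \
    --delta /root/minigpt4/vicuna-7b-delta-v1.1

After merging, point the llama_model entry in minigpt4/configs/models/minigpt4.yaml at the merged directory, and the ckpt entry in eval_configs/minigpt4_eval.yaml at the pretrained MiniGPT-4 checkpoint (key names taken from the upstream README; verify them against your checkout).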

After that, when running the Python demo, just pip install whatever dependency is reported missing; the authors did not provide a requirements.txt.

python demo.py --cfg-path eval_configs/minigpt4_eval.yaml  --gpu-id 0

Other changes

Running python demo.py can still hit other errors:

If downloading the BERT tokenizer and BERT config fails with HTTPS errors, download the files from Hugging Face manually and then edit models/blip2.py in the MiniGPT-4 project:

tokenizer = BertTokenizer.from_pretrained("/root/minigpt4/bert-base-uncased", local_files_only=True)
...
encoder_config = BertConfig.from_pretrained("/root/minigpt4/bert-base-uncased", local_files_only=True)       
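
The local bert-base-uncased copy itself can be fetched the same way as the weights above. At minimum, BertTokenizer needs vocab.txt and BertConfig needs config.json; the file list below is an assumption based on the usual bert-base-uncased repository layout:

mkdir -p /root/minigpt4/bert-base-uncased
cd /root/minigpt4/bert-base-uncased
wget --no-check-certificate http://huggingface.co/bert-base-uncased/resolve/main/config.json
wget --no-check-certificate http://huggingface.co/bert-base-uncased/resolve/main/vocab.txt
wget --no-check-certificate http://huggingface.co/bert-base-uncased/resolve/main/tokenizer_config.json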

To make the demo listen on the network instead of only 127.0.0.1:

demo.launch(share=True, enable_queue=True, server_name='0.0.0.0')
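
Gradio listens on port 7860 by default when server_port is not set, so the demo should then be reachable from another machine at http://<server-ip>:7860 (<server-ip> is a placeholder for the host's address); a quick reachability check:

curl -I http://<server-ip>:7860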

That concludes this article on deploying MiniGPT-4 locally; I hope it helps!



