Implementing Chunked Upload of Large Files in Spring Boot

2024-09-05 23:28

This article shows how to implement chunked (multipart) upload of large files in Spring Boot. Hopefully it serves as a useful reference for developers solving the same problem.

1. When to Use Chunked Upload

  • Faster uploads of large files: for files over 100 MB, chunked upload lets you upload multiple parts in parallel to speed up the transfer.

  • Poor network conditions: chunked upload is recommended on unreliable networks; when an upload fails, you only need to retransmit the failed part.

  • Unknown file size: the upload can start before the total file size is known, which is common in applications such as video surveillance.

2. How It Works

The idea is simple: the client splits the large file into fixed-size pieces (for example, 20 MB each) and uploads each piece to the server individually. Once every piece has been uploaded, the client asks the server to merge them; when the merge completes, the whole upload is done.
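The split rule described above boils down to a bit of ceiling arithmetic. A minimal sketch (the class and method names here are illustrative, not part of the project):

```java
// Ceiling division gives the part count; every part is PART_SIZE bytes
// except possibly the last one.
public class ChunkMath {
    static final long PART_SIZE = 20L * 1024 * 1024; // 20 MB per part, as in the article

    // number of parts needed for a file of the given length
    static long chunkCount(long contentLength) {
        return (contentLength + PART_SIZE - 1) / PART_SIZE;
    }

    // size of part i (1-based); only the last part may be smaller than PART_SIZE
    static long partLength(long contentLength, long i) {
        long parts = chunkCount(contentLength);
        return i < parts ? PART_SIZE : contentLength - (parts - 1) * PART_SIZE;
    }

    public static void main(String[] args) {
        long size = 45L * 1024 * 1024; // a 45 MB file
        System.out.println(chunkCount(size));     // 3 parts: 20 MB + 20 MB + 5 MB
        System.out.println(partLength(size, 3));  // 5242880 (5 MB)
    }
}
```

The test class in section 4 computes the same count with `Math.ceil(contentLength * 1.0 / partSize)`.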

3. Project Code

Goal

Implement chunked upload of large files.

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>springboot-demo</artifactId>
        <groupId>com.et</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>file</artifactId>
    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-autoconfigure</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpmime</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
        <dependency>
            <groupId>cn.hutool</groupId>
            <artifactId>hutool-core</artifactId>
            <version>5.8.15</version>
        </dependency>
    </dependencies>
</project>

controller

package com.et.controller;

import com.et.bean.Chunk;
import com.et.bean.FileInfo;
import com.et.service.ChunkService;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.Resource;
import org.springframework.http.HttpHeaders;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@RestController
@RequestMapping("file")
public class ChunkController {

    @Autowired
    private ChunkService chunkService;

    /**
     * upload by part
     *
     * @param chunk
     * @return
     */
    @PostMapping(value = "chunk")
    public ResponseEntity<String> chunk(Chunk chunk) {
        chunkService.chunk(chunk);
        return ResponseEntity.ok("File Chunk Upload Success");
    }

    /**
     * merge
     *
     * @param filename
     * @return
     */
    @GetMapping(value = "merge")
    public ResponseEntity<Void> merge(@RequestParam("filename") String filename) {
        chunkService.merge(filename);
        return ResponseEntity.ok().build();
    }

    /**
     * get fileName
     *
     * @return files
     */
    @GetMapping("/files")
    public ResponseEntity<List<FileInfo>> list() {
        return ResponseEntity.ok(chunkService.list());
    }

    /**
     * get single file
     *
     * @param filename
     * @return file
     */
    @GetMapping("/files/{filename:.+}")
    public ResponseEntity<Resource> getFile(@PathVariable("filename") String filename) {
        return ResponseEntity.ok()
                .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"" + filename + "\"")
                .body(chunkService.getFile(filename));
    }
}
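The Chunk bean the controller binds is not shown in this excerpt; its fields can be read off the multipart form that the test class in section 4 posts (chunkNumber, chunkSize, currentChunkSize, totalSize, filename, totalChunks, plus the binary part itself). A dependency-free sketch; in the real bean the file field is presumably a Spring MultipartFile, with byte[] standing in here so the sketch compiles on its own:

```java
public class Chunk {
    private byte[] file;            // stands in for Spring's MultipartFile in this sketch
    private Integer chunkNumber;    // 1-based index of this part
    private Long chunkSize;         // nominal part size (20 MB in the test)
    private Long currentChunkSize;  // actual size of this part (the last one may be smaller)
    private Long totalSize;         // size of the complete file
    private String filename;        // original file name, used as the merge key
    private Integer totalChunks;    // total number of parts

    public Integer getChunkNumber() { return chunkNumber; }
    public void setChunkNumber(Integer chunkNumber) { this.chunkNumber = chunkNumber; }
    public String getFilename() { return filename; }
    public void setFilename(String filename) { this.filename = filename; }
    // remaining getters/setters follow the same pattern
}
```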

config

package com.et.config;

import com.et.service.FileClient;
import com.et.service.impl.LocalFileSystemClient;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

@Configuration
public class FileClientConfig {

    @Value("${file.client.type:local-file}")
    private String fileClientType;

    private static final Map<String, Supplier<FileClient>> FILE_CLIENT_SUPPLY =
            new HashMap<String, Supplier<FileClient>>() {{
                put("local-file", LocalFileSystemClient::new);
                // put("aws-s3", AWSFileClient::new);
            }};

    /**
     * get client
     *
     * @return
     */
    @Bean
    public FileClient fileClient() {
        return FILE_CLIENT_SUPPLY.get(fileClientType).get();
    }
}
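Neither the FileClient interface nor LocalFileSystemClient appears in the excerpt. The sketch below infers the contract from the calls ChunkServiceImpl makes (initTask, chunk, merge); the signatures are simplified to plain byte[] and Path, whereas the real code passes the Chunk bean and a ChunkProcess, and the real class has a no-arg constructor as the config above requires:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Simplified contract inferred from ChunkServiceImpl's usage
interface FileClient {
    String initTask(String filename);                            // start a task, returns an uploadId
    String chunk(byte[] part, int chunkNumber, String uploadId); // store one part, returns a chunkId
    void merge(String uploadId, Path target) throws IOException; // assemble the parts
}

// Minimal local-filesystem client: each upload gets a staging directory,
// each part is a file named after its chunk number, merge concatenates in order.
class LocalFileSystemClient implements FileClient {
    private final Path workDir;

    LocalFileSystemClient(Path workDir) { this.workDir = workDir; }

    @Override
    public String initTask(String filename) {
        String uploadId = UUID.randomUUID().toString();
        try {
            Files.createDirectories(workDir.resolve(uploadId));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return uploadId;
    }

    @Override
    public String chunk(byte[] part, int chunkNumber, String uploadId) {
        try {
            Files.write(workDir.resolve(uploadId).resolve(String.valueOf(chunkNumber)), part);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return uploadId + "-" + chunkNumber;
    }

    @Override
    public void merge(String uploadId, Path target) throws IOException {
        Path dir = workDir.resolve(uploadId);
        List<Path> ordered;
        try (Stream<Path> parts = Files.list(dir)) {
            // sort staged part files by chunk number so bytes come out in order
            ordered = parts
                    .sorted(Comparator.comparingInt((Path p) -> Integer.parseInt(p.getFileName().toString())))
                    .collect(Collectors.toList());
        }
        try (OutputStream out = Files.newOutputStream(target)) {
            for (Path p : ordered) {
                Files.copy(p, out); // append this part to the merged file
                Files.delete(p);
            }
        }
        Files.delete(dir); // staging dir is empty now
    }
}
```

Swapping in an S3-backed client (the commented-out aws-s3 entry above) would map initTask/chunk/merge onto S3's multipart-upload calls without touching the service layer.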

service

package com.et.service;

import com.et.bean.Chunk;
import com.et.bean.FileInfo;

import org.springframework.core.io.Resource;

import java.util.List;

public interface ChunkService {

    void chunk(Chunk chunk);

    void merge(String filename);

    List<FileInfo> list();

    Resource getFile(String filename);
}
package com.et.service.impl;

import com.et.bean.Chunk;
import com.et.bean.ChunkProcess;
import com.et.bean.FileInfo;
import com.et.service.ChunkService;
import com.et.service.FileClient;

import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.Resource;
import org.springframework.stereotype.Service;

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicBoolean;

@Service
@Slf4j
public class ChunkServiceImpl implements ChunkService {

    // upload progress per file
    private static final Map<String, ChunkProcess> CHUNK_PROCESS_STORAGE = new ConcurrentHashMap<>();
    // completed file list
    private static final List<FileInfo> FILE_STORAGE = new CopyOnWriteArrayList<>();

    @Autowired
    private FileClient fileClient;

    @Override
    public void chunk(Chunk chunk) {
        String filename = chunk.getFilename();
        boolean match = FILE_STORAGE.stream().anyMatch(fileInfo -> fileInfo.getFileName().equals(filename));
        if (match) {
            throw new RuntimeException("File [ " + filename + " ] already exist");
        }
        ChunkProcess chunkProcess;
        String uploadId;
        if (CHUNK_PROCESS_STORAGE.containsKey(filename)) {
            chunkProcess = CHUNK_PROCESS_STORAGE.get(filename);
            uploadId = chunkProcess.getUploadId();
            AtomicBoolean isUploaded = new AtomicBoolean(false);
            Optional.ofNullable(chunkProcess.getChunkList()).ifPresent(chunkPartList ->
                    isUploaded.set(chunkPartList.stream()
                            .anyMatch(chunkPart -> chunkPart.getChunkNumber() == chunk.getChunkNumber())));
            if (isUploaded.get()) {
                log.info("file [{}] chunk [{}] already uploaded, skip", chunk.getFilename(), chunk.getChunkNumber());
                return;
            }
        } else {
            uploadId = fileClient.initTask(filename);
            chunkProcess = new ChunkProcess().setFilename(filename).setUploadId(uploadId);
            CHUNK_PROCESS_STORAGE.put(filename, chunkProcess);
        }
        List<ChunkProcess.ChunkPart> chunkList = chunkProcess.getChunkList();
        String chunkId = fileClient.chunk(chunk, uploadId);
        chunkList.add(new ChunkProcess.ChunkPart(chunkId, chunk.getChunkNumber()));
        CHUNK_PROCESS_STORAGE.put(filename, chunkProcess.setChunkList(chunkList));
    }

    @Override
    public void merge(String filename) {
        ChunkProcess chunkProcess = CHUNK_PROCESS_STORAGE.get(filename);
        fileClient.merge(chunkProcess);
        SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        String currentTime = simpleDateFormat.format(new Date());
        FILE_STORAGE.add(new FileInfo().setUploadTime(currentTime).setFileName(filename));
        CHUNK_PROCESS_STORAGE.remove(filename);
    }

    @Override
    public List<FileInfo> list() {
        return FILE_STORAGE;
    }

    @Override
    public Resource getFile(String filename) {
        return fileClient.getFile(filename);
    }
}
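ChunkProcess is another bean the service uses but the excerpt does not define. Its shape can be inferred from the code above: fluent setters and a nested ChunkPart holding the chunk id and its position. Given the fluent style, the real class likely uses Lombok; plain Java is used here so the sketch stands alone. One detail worth noting: the service guards getChunkList() with Optional.ofNullable, so in the original the list may start as null, while this sketch initializes it empty:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkProcess {
    private String filename;  // file being uploaded (also the map key in the service)
    private String uploadId;  // id handed out by FileClient.initTask
    private List<ChunkPart> chunkList = new ArrayList<>(); // parts uploaded so far

    public ChunkProcess setFilename(String filename) { this.filename = filename; return this; }
    public ChunkProcess setUploadId(String uploadId) { this.uploadId = uploadId; return this; }
    public ChunkProcess setChunkList(List<ChunkPart> chunkList) { this.chunkList = chunkList; return this; }
    public String getFilename() { return filename; }
    public String getUploadId() { return uploadId; }
    public List<ChunkPart> getChunkList() { return chunkList; }

    // one uploaded part: the id returned by the file client plus its position
    public static class ChunkPart {
        private final String chunkId;
        private final int chunkNumber;

        public ChunkPart(String chunkId, int chunkNumber) {
            this.chunkId = chunkId;
            this.chunkNumber = chunkNumber;
        }
        public String getChunkId() { return chunkId; }
        public int getChunkNumber() { return chunkNumber; }
    }
}
```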
package com.et.service.impl;

import com.et.bean.FileInfo;
import com.et.service.FileUploadService;

import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.Resource;
import org.springframework.stereotype.Service;
import org.springframework.util.FileCopyUtils;
import org.springframework.web.multipart.MultipartFile;

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

@Service
@Slf4j
public class FileUploadServiceImpl implements FileUploadService {

    @Value("${upload.path:/data/upload/}")
    private String filePath;

    private static final List<FileInfo> FILE_STORAGE = new CopyOnWriteArrayList<>();

    @Override
    public void upload(MultipartFile[] files) {
        SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        for (MultipartFile file : files) {
            String fileName = file.getOriginalFilename();
            boolean match = FILE_STORAGE.stream().anyMatch(fileInfo -> fileInfo.getFileName().equals(fileName));
            if (match) {
                throw new RuntimeException("File [ " + fileName + " ] already exist");
            }
            String currentTime = simpleDateFormat.format(new Date());
            try (InputStream in = file.getInputStream();
                 OutputStream out = Files.newOutputStream(Paths.get(filePath + fileName))) {
                FileCopyUtils.copy(in, out);
            } catch (IOException e) {
                log.error("File [{}] upload failed", fileName, e);
                throw new RuntimeException(e);
            }
            FileInfo fileInfo = new FileInfo().setFileName(fileName).setUploadTime(currentTime);
            FILE_STORAGE.add(fileInfo);
        }
    }

    @Override
    public List<FileInfo> list() {
        return FILE_STORAGE;
    }

    @Override
    public Resource getFile(String fileName) {
        FILE_STORAGE.stream()
                .filter(info -> info.getFileName().equals(fileName))
                .findFirst()
                .orElseThrow(() -> new RuntimeException("File [ " + fileName + " ] not exist"));
        return new FileSystemResource(new File(filePath + fileName));
    }
}

The above is only the key code; for the complete source, see the repository below.

Code Repository

  • https://github.com/Harries/springboot-demo(File)

4. Testing

  • Start the Spring Boot application
  • Write a test class
package com.et.file;

import cn.hutool.core.io.FileUtil;
import org.junit.Test;
import org.springframework.core.io.FileSystemResource;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.RestTemplate;

import java.io.RandomAccessFile;

public class MultipartUploadTest {

    @Test
    public void testUpload() throws Exception {
        String chunkFileFolder = "D:/tmp/";
        java.io.File file = new java.io.File("D:/SoftWare/oss-browser-win32-ia32.zip");
        long contentLength = file.length();
        // partSize: 20MB
        long partSize = 20 * 1024 * 1024;
        // the last part may be less than 20MB
        long chunkFileNum = (long) Math.ceil(contentLength * 1.0 / partSize);
        RestTemplate restTemplate = new RestTemplate();
        try (RandomAccessFile raf_read = new RandomAccessFile(file, "r")) {
            // read buffer
            byte[] b = new byte[1024];
            for (int i = 1; i <= chunkFileNum; i++) {
                // chunk file
                java.io.File chunkFile = new java.io.File(chunkFileFolder + i);
                // write the next slice of the source file into the chunk file
                try (RandomAccessFile raf_write = new RandomAccessFile(chunkFile, "rw")) {
                    int len;
                    while ((len = raf_read.read(b)) != -1) {
                        raf_write.write(b, 0, len);
                        if (chunkFile.length() >= partSize) {
                            break;
                        }
                    }
                    // upload this chunk
                    MultiValueMap<String, Object> body = new LinkedMultiValueMap<>();
                    body.add("file", new FileSystemResource(chunkFile));
                    body.add("chunkNumber", i);
                    body.add("chunkSize", partSize);
                    body.add("currentChunkSize", chunkFile.length());
                    body.add("totalSize", contentLength);
                    body.add("filename", file.getName());
                    body.add("totalChunks", chunkFileNum);
                    HttpHeaders headers = new HttpHeaders();
                    headers.setContentType(MediaType.MULTIPART_FORM_DATA);
                    HttpEntity<MultiValueMap<String, Object>> requestEntity = new HttpEntity<>(body, headers);
                    String serverUrl = "http://localhost:8080/file/chunk";
                    ResponseEntity<String> response = restTemplate.postForEntity(serverUrl, requestEntity, String.class);
                    System.out.println("Response code: " + response.getStatusCode() + " Response body: " + response.getBody());
                } finally {
                    FileUtil.del(chunkFile);
                }
            }
        }
        // merge file
        String mergeUrl = "http://localhost:8080/file/merge?filename=" + file.getName();
        ResponseEntity<String> response = restTemplate.getForEntity(mergeUrl, String.class);
        System.out.println("Response code: " + response.getStatusCode() + " Response body: " + response.getBody());
    }
}
  • Run the test class; the log output looks like the following

(screenshot: upload log)

5. References

  • Chunked upload of large files in Spring Boot, supporting local files and Amazon S3 - Yuandupier
  • Implementing Chunked Upload of Large Files in Spring Boot | Harries Blog™

That concludes this article on implementing chunked upload of large files in Spring Boot. We hope it is helpful to fellow developers!



http://www.chinasem.cn/article/1140354
