Hadoop, From Getting Started to Giving Up (Part 1): Collecting Data with Flume and Storing It in HDFS

2024-06-09 10:32

This post walks through using Apache Flume to pick up files from a local spooling directory and write them into HDFS.

1. Extract Flume into the /hadoop/ directory

tar -zxvf apache-flume-1.6.0-bin.tar.gz  -C /hadoop/


2. Write the Flume configuration file

[hadoop@hadoop01 flume]$ cat conf/agent1.conf
# Name the components on this agent
agent1.sources = spooldirSource
agent1.channels = fileChannel
agent1.sinks = hdfsSink

# Describe/configure the source
agent1.sources.spooldirSource.type = spooldir
agent1.sources.spooldirSource.spoolDir = /home/hadoop/spooldir

# Describe the sink
agent1.sinks.hdfsSink.type = hdfs
agent1.sinks.hdfsSink.hdfs.path = hdfs://hadoop01:9000/flume/%y-%m-%d/%H%M/%S
agent1.sinks.hdfsSink.hdfs.round = true
agent1.sinks.hdfsSink.hdfs.roundValue = 10
agent1.sinks.hdfsSink.hdfs.roundUnit = minute
agent1.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
agent1.sinks.hdfsSink.hdfs.fileType = DataStream

# Describe the channel
agent1.channels.fileChannel.type = file
agent1.channels.fileChannel.dataDirs = /hadoop/flume/datadir

# Bind the source and sink to the channel
agent1.sources.spooldirSource.channels = fileChannel
agent1.sinks.hdfsSink.channel = fileChannel
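The `hdfs.path` pattern together with `hdfs.round`, `hdfs.roundValue = 10`, and `hdfs.roundUnit = minute` is what produces bucket directories like `/flume/16-08-09/1630/00`: the event timestamp's minutes are floored to the nearest multiple of 10 and the `%S` component becomes 00. A minimal Python sketch of that rounding (the `bucket_dir` function is my own illustration, not Flume code):

```python
from datetime import datetime

def bucket_dir(dt: datetime, round_minutes: int = 10) -> str:
    """Mimic Flume's hdfs.round / roundValue / roundUnit=minute bucketing
    for the path pattern %y-%m-%d/%H%M/%S."""
    floored = dt.replace(minute=dt.minute - dt.minute % round_minutes,
                         second=0, microsecond=0)
    return floored.strftime("%y-%m-%d/%H%M/%S")

# An event written at 16:36:54 on 2016-08-09 lands in the 1630/00 bucket:
print(bucket_dir(datetime(2016, 8, 9, 16, 36, 54)))  # → 16-08-09/1630/00
```

This is why the files created at 16:36-16:37 in the logs below all land under the same `1630/00` directory.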


3. Start Flume

From the Flume home directory, run:

bin/flume-ng agent --conf conf --conf-file conf/agent1.conf --name agent1 -Dflume.root.logger=INFO,console

 

A successful start produces output like the following:

...
2016-08-09 16:28:33,888 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink hdfsSink
2016-08-09 16:28:33,891 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:184)] Starting Source spooldirSource
2016-08-09 16:28:33,891 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.SpoolDirectorySource.start(SpoolDirectorySource.java:78)] SpoolDirectorySource source starting with directory: /home/hadoop/spooldir
2016-08-09 16:28:33,900 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: hdfsSink: Successfully registered new MBean.
2016-08-09 16:28:33,900 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SINK, name: hdfsSink started
2016-08-09 16:28:33,925 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SOURCE, name: spooldirSource: Successfully registered new MBean.
2016-08-09 16:28:33,925 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: spooldirSource started


 

4. Write log files into the Flume spool directory
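The spooling-directory source requires that a file be complete and immutable once it appears in `spoolDir`; appending to a file already in the directory can make Flume fail with a file-changed error. A common pattern is to write the file elsewhere on the same filesystem and then rename it in, which is atomic. A minimal Python sketch (the `deliver` helper and the staging path are my own illustration, not part of Flume):

```python
import os
import tempfile

def deliver(data: bytes, stage_dir: str, spool_dir: str, name: str) -> str:
    """Write the file in a staging directory first, then move it into the
    spool directory with one atomic rename, so the spooldir source never
    sees a partially written file."""
    os.makedirs(stage_dir, exist_ok=True)
    os.makedirs(spool_dir, exist_ok=True)
    tmp = os.path.join(stage_dir, name)
    with open(tmp, "wb") as f:
        f.write(data)
    final = os.path.join(spool_dir, name)
    os.rename(tmp, final)  # atomic when both dirs are on one filesystem
    return final

# Demo under a throwaway directory; in this walkthrough the real target
# would be /home/hadoop/spooldir.
base = tempfile.mkdtemp()
path = deliver(b"1363157985066 13726230503 sample-record 200\n",
               os.path.join(base, "stage"),
               os.path.join(base, "spooldir"),
               "HTTP_20130313143750.dat")
print(path.endswith("HTTP_20130313143750.dat"))  # → True
```

Once the file lands, Flume consumes it and renames it with a `.COMPLETED` suffix, as the log output below shows.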

Once the write completes, output like the following appears:

2016-08-09 16:36:51,204 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2016-08-09 16:36:51,204 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /home/hadoop/spooldir/HTTP_20130313143750.dat to /home/hadoop/spooldir/HTTP_20130313143750.dat.COMPLETED
2016-08-09 16:36:53,965 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:58)] Serializer = TEXT, UseRawLocalFileSystem = false
2016-08-09 16:36:54,206 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966.tmp
2016-08-09 16:36:56,772 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:363)] Closing hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966.tmp
2016-08-09 16:36:56,903 (hdfs-hdfsSink-call-runner-3) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:629)] Renaming hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966.tmp to hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966
2016-08-09 16:36:57,149 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967.tmp
2016-08-09 16:36:57,637 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:363)] Closing hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967.tmp
2016-08-09 16:36:57,805 (hdfs-hdfsSink-call-runner-7) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:629)] Renaming hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967.tmp to hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967
2016-08-09 16:36:57,955 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968.tmp
2016-08-09 16:37:03,525 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint(EventQueueBackingStoreFile.java:230)] Start checkpoint for /home/hadoop/.flume/file-channel/checkpoint/checkpoint, elements to sync = 22
2016-08-09 16:37:03,566 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:255)] Updating checkpoint metadata: logWriteOrderID: 1470731313610, queueSize: 0, queueHead: 20
2016-08-09 16:37:03,572 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:1034)] Updated checkpoint for file: /hadoop/flume/datadir/log-5 position: 4155 logWriteOrderID: 1470731313610
2016-08-09 16:37:03,572 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.LogFile$RandomReader.close(LogFile.java:504)] Closing RandomReader /hadoop/flume/datadir/log-3
2016-08-09 16:37:28,072 (hdfs-hdfsSink-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:363)] Closing hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968.tmp
2016-08-09 16:37:28,182 (hdfs-hdfsSink-call-runner-3) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:629)] Renaming hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968.tmp to hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968
2016-08-09 16:37:28,364 (hdfs-hdfsSink-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.HDFSEventSink$1.run(HDFSEventSink.java:394)] Writer callback called.


 

 

Check the corresponding directory on HDFS:

[hadoop@hadoop01 spooldir]$ hadoop fs -ls /flume/16-08-09/1630/00
Found 3 items
-rw-r--r--   3 hadoop supergroup        969 2016-08-09 16:36 /flume/16-08-09/1630/00/FlumeData.1470731813966
-rw-r--r--   3 hadoop supergroup       1070 2016-08-09 16:36 /flume/16-08-09/1630/00/FlumeData.1470731813967
-rw-r--r--   3 hadoop supergroup        191 2016-08-09 16:37 /flume/16-08-09/1630/00/FlumeData.1470731813968
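A quick sanity check on the listing: the three part files together should hold all of the input data, here 969 + 1070 + 191 = 2230 bytes. A small Python sketch that totals the sizes from `hadoop fs -ls` output (the column positions are assumed from the listing format shown above):

```python
ls_output = """Found 3 items
-rw-r--r--   3 hadoop supergroup        969 2016-08-09 16:36 /flume/16-08-09/1630/00/FlumeData.1470731813966
-rw-r--r--   3 hadoop supergroup       1070 2016-08-09 16:36 /flume/16-08-09/1630/00/FlumeData.1470731813967
-rw-r--r--   3 hadoop supergroup        191 2016-08-09 16:37 /flume/16-08-09/1630/00/FlumeData.1470731813968"""

def total_bytes(listing: str) -> int:
    # In hadoop fs -ls output, the 5th column of each file line
    # (lines starting with "-") is the size in bytes.
    return sum(int(line.split()[4])
               for line in listing.splitlines()
               if line.startswith("-"))

print(total_bytes(ls_output))  # → 2230
```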


 

View the file contents on HDFS:

[hadoop@hadoop01 spooldir]$ hadoop fs -cat /flume/16-08-09/1630/00/*
1363157985066 13726230503 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 2468 200
1363157995052 13826544101 5C-0E-8B-C7-F1-E0:CMCC 120.197.40.4 4 0 264 0 200
1363157991076 13926435656 20-10-7A-28-CC-0A:CMCC 120.196.100.99 2 4 132 1512 200
1363154400022 13926251106 5C-0E-8B-8B-B1-50:CMCC 120.197.40.4 4 0 240 0 200
1363157993044 18211575961 94-71-AC-CD-E6-18:CMCC-EASY 120.196.100.99 iface.qiyi.com 视频网站 15 12 1527 2106 200
1363157995074 84138413 5C-0E-8B-8C-E8-20:7DaysInn 120.197.40.4 122.72.52.12 20 16 4116 1432 200
1363157993055 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
1363157995033 15920133257 5C-0E-8B-C7-BA-20:CMCC 120.197.40.4 sug.so.360.cn 信息安全 20 20 3156 2936 200
1363157983019 13719199419 68-A1-B7-03-07-B1:CMCC-EASY 120.196.100.82 4 0 240 0 200
1363157984041 13660577991 5C-0E-8B-92-5C-20:CMCC-EASY 120.197.40.4 s19.cnzz.com 站点统计 24 9 6960 690 200
1363157973098 15013685858 5C-0E-8B-C7-F7-90:CMCC 120.197.40.4 rank.ie.sogou.com 搜索引擎 28 27 3659 3538 200
1363157986029 15989002119 E8-99-C4-4E-93-E0:CMCC-EASY 120.196.100.99 www.umeng.com 站点统计 3 3 1938 180 200
1363157992093 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 15 9 918 4938 200
1363157986041 13480253104 5C-0E-8B-C7-FC-80:CMCC-EASY 120.197.40.4 3 3 180 180 200
1363157984040 13602846565 5C-0E-8B-8B-B6-00:CMCC 120.197.40.4 2052.flash2-http.qq.com 综合门户 15 12 1938 2910 200
1363157995093 13922314466 00-FD-07-A2-EC-BA:CMCC 120.196.100.82 img.qfc.cn 12 12 3008 3720 200
1363157982040 13502468823 5C-0A-5B-6A-0B-D4:CMCC-EASY 120.196.100.99 y0.ifengimg.com 综合门户 57 102 7335 110349 200
1363157986072 18320173382 84-25-DB-4F-10-1A:CMCC-EASY 120.196.100.99 input.shouji.sogou.com 搜索引擎 21 18 9531 2412 200
1363157990043 13925057413 00-1F-64-E1-E6-9A:CMCC 120.196.100.55 t3.baidu.com 搜索引擎 69 63 11058 4824 200
1363157988072 13760778710 00-FD-07-A4-7B-08:CMCC 120.196.100.82 2 2 120 120 200
1363157985066 13726238888 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 2468 200
1363157993055 13560436666 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
[hadoop@hadoop01 spooldir]$


The upload succeeded.
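These records are the familiar phone-traffic sample set, and the usual follow-up exercise is to total the upstream and downstream bytes per phone number. Because some records omit the hostname and category columns, it is safer to index the byte counters from the end of each line. A small Python sketch (the `parse` helper is my own, not part of any Hadoop API):

```python
def parse(line: str):
    f = line.split()
    # The phone number is always field 1; the up/down byte counts are
    # always the 3rd- and 2nd-to-last fields (the last is the HTTP status),
    # regardless of whether the hostname/category columns are present.
    return f[1], int(f[-3]), int(f[-2])

sample = [
    "1363157993055 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200",
    "1363157992093 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 15 9 918 4938 200",
]
totals = {}
for line in sample:
    phone, up, down = parse(line)
    u, d = totals.get(phone, (0, 0))
    totals[phone] = (u + up, d + down)
print(totals)  # → {'13560439658': (2034, 5892)}
```

The same aggregation is what the classic MapReduce traffic-count exercise computes over the full dataset.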

