Hadoop from Getting Started to Giving Up (1): Using Flume to Collect Data into HDFS

2024-06-09 10:32

This post shows how to use Apache Flume to pick up log files from a local spool directory and store them in HDFS: unpack Flume, configure an agent with a spooldir source, a file channel, and an HDFS sink, start the agent, feed it a log file, and verify the result on HDFS.

1. Unpack Flume into the /hadoop/ directory

tar -zxvf apache-flume-1.6.0-bin.tar.gz  -C /hadoop/
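
Optionally, sanity-check the unpacked distribution. A minimal sketch, assuming the archive unpacked to /hadoop/apache-flume-1.6.0-bin (this is the Flume home directory referred to below):

cd /hadoop/apache-flume-1.6.0-bin
bin/flume-ng version      # should report Flume 1.6.0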


2. Configure the Flume agent

[hadoop@hadoop01 flume]$ cat conf/agent1.conf
# Name the components on this agent
agent1.sources = spooldirSource
agent1.channels = fileChannel
agent1.sinks = hdfsSink

# Describe/configure the source
agent1.sources.spooldirSource.type = spooldir
agent1.sources.spooldirSource.spoolDir = /home/hadoop/spooldir

# Describe the sink
agent1.sinks.hdfsSink.type = hdfs
agent1.sinks.hdfsSink.hdfs.path = hdfs://hadoop01:9000/flume/%y-%m-%d/%H%M/%S
agent1.sinks.hdfsSink.hdfs.round = true
agent1.sinks.hdfsSink.hdfs.roundValue = 10
agent1.sinks.hdfsSink.hdfs.roundUnit = minute
agent1.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
agent1.sinks.hdfsSink.hdfs.fileType = DataStream

# Describe the channel
agent1.channels.fileChannel.type = file
agent1.channels.fileChannel.dataDirs = /hadoop/flume/datadir

# Bind the source and sink to the channel
agent1.sources.spooldirSource.channels = fileChannel
agent1.sinks.hdfsSink.channel = fileChannel
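
In this configuration, the spooldir source watches /home/hadoop/spooldir for new files, the file channel buffers events on local disk under /hadoop/flume/datadir, and the HDFS sink writes plain text (fileType = DataStream) into time-bucketed directories, using the local clock (useLocalTimeStamp) rounded down to 10-minute buckets. Before starting the agent, the directories referenced in the config should exist; a minimal sketch using the paths above:

mkdir -p /home/hadoop/spooldir       # directory watched by the spooldir source
mkdir -p /hadoop/flume/datadir       # local data directory for the file channel
hadoop fs -mkdir -p /flume           # base HDFS path used by the sink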


3. Start the Flume agent

Change into the Flume home directory and run:

bin/flume-ng agent --conf conf --conf-file conf/agent1.conf --name agent1 -Dflume.root.logger=INFO,console
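
The command above runs the agent in the foreground and logs to the console, which is convenient for a first test. For a longer-running agent, one option (a sketch, not part of the original walkthrough; the log file name is arbitrary) is to push it into the background and redirect the console output:

nohup bin/flume-ng agent --conf conf --conf-file conf/agent1.conf --name agent1 \
  -Dflume.root.logger=INFO,console > flume-agent1.log 2>&1 &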

 

If the agent starts successfully, you should see output like the following:

....................................
2016-08-09 16:28:33,888 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink hdfsSink
2016-08-09 16:28:33,891 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:184)] Starting Source spooldirSource
2016-08-09 16:28:33,891 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.SpoolDirectorySource.start(SpoolDirectorySource.java:78)] SpoolDirectorySource source starting with directory: /home/hadoop/spooldir
2016-08-09 16:28:33,900 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: hdfsSink: Successfully registered new MBean.
2016-08-09 16:28:33,900 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SINK, name: hdfsSink started
2016-08-09 16:28:33,925 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SOURCE, name: spooldirSource: Successfully registered new MBean.
2016-08-09 16:28:33,925 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: spooldirSource started

4. Write a log file into the Flume spool directory
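
Any file dropped into /home/hadoop/spooldir is picked up by the spooldir source, buffered in the file channel, and written to HDFS by the sink. A minimal sketch; HTTP_20130313143750.dat is the sample traffic log that shows up in the agent output below, but any text file will do:

cp HTTP_20130313143750.dat /home/hadoop/spooldir/

After the file has been fully consumed, the source renames it in place to HTTP_20130313143750.dat.COMPLETED.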

Once the file has been picked up, the agent log shows output like the following:

2016-08-09 16:36:51,204 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2016-08-09 16:36:51,204 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /home/hadoop/spooldir/HTTP_20130313143750.dat to /home/hadoop/spooldir/HTTP_20130313143750.dat.COMPLETED
2016-08-09 16:36:53,965 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:58)] Serializer = TEXT, UseRawLocalFileSystem = false
2016-08-09 16:36:54,206 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966.tmp
2016-08-09 16:36:56,772 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:363)] Closing hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966.tmp
2016-08-09 16:36:56,903 (hdfs-hdfsSink-call-runner-3) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:629)] Renaming hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966.tmp to hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966
2016-08-09 16:36:57,149 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967.tmp
2016-08-09 16:36:57,637 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:363)] Closing hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967.tmp
2016-08-09 16:36:57,805 (hdfs-hdfsSink-call-runner-7) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:629)] Renaming hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967.tmp to hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967
2016-08-09 16:36:57,955 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968.tmp
2016-08-09 16:37:03,525 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint(EventQueueBackingStoreFile.java:230)] Start checkpoint for /home/hadoop/.flume/file-channel/checkpoint/checkpoint, elements to sync = 22
2016-08-09 16:37:03,566 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:255)] Updating checkpoint metadata: logWriteOrderID: 1470731313610, queueSize: 0, queueHead: 20
2016-08-09 16:37:03,572 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:1034)] Updated checkpoint for file: /hadoop/flume/datadir/log-5 position: 4155 logWriteOrderID: 1470731313610
2016-08-09 16:37:03,572 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.LogFile$RandomReader.close(LogFile.java:504)] Closing RandomReader /hadoop/flume/datadir/log-3
2016-08-09 16:37:28,072 (hdfs-hdfsSink-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:363)] Closing hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968.tmp
2016-08-09 16:37:28,182 (hdfs-hdfsSink-call-runner-3) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:629)] Renaming hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968.tmp to hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968
2016-08-09 16:37:28,364 (hdfs-hdfsSink-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.HDFSEventSink$1.run(HDFSEventSink.java:394)] Writer callback called.

Check the corresponding directory on HDFS (the .tmp files from the log have been renamed to their final FlumeData.* names):

[hadoop@hadoop01 spooldir]$ hadoop fs -ls /flume/16-08-09/1630/00
Found 3 items
-rw-r--r--   3 hadoop supergroup        969 2016-08-09 16:36 /flume/16-08-09/1630/00/FlumeData.1470731813966
-rw-r--r--   3 hadoop supergroup       1070 2016-08-09 16:36 /flume/16-08-09/1630/00/FlumeData.1470731813967
-rw-r--r--   3 hadoop supergroup        191 2016-08-09 16:37 /flume/16-08-09/1630/00/FlumeData.1470731813968
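
To pull one of these files back to the local filesystem for a closer look, hadoop fs -get can be used (a sketch; the exact file name depends on the run):

hadoop fs -get /flume/16-08-09/1630/00/FlumeData.1470731813966 .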

View the file contents on HDFS:

[hadoop@hadoop01 spooldir]$ hadoop fs -cat /flume/16-08-09/1630/00/*
1363157985066 13726230503 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 2468 200
1363157995052 13826544101 5C-0E-8B-C7-F1-E0:CMCC 120.197.40.4 4 0 264 0 200
1363157991076 13926435656 20-10-7A-28-CC-0A:CMCC 120.196.100.99 2 4 132 1512 200
1363154400022 13926251106 5C-0E-8B-8B-B1-50:CMCC 120.197.40.4 4 0 240 0 200
1363157993044 18211575961 94-71-AC-CD-E6-18:CMCC-EASY 120.196.100.99 iface.qiyi.com 视频网站 15 12 1527 2106 200
1363157995074 84138413 5C-0E-8B-8C-E8-20:7DaysInn 120.197.40.4 122.72.52.12 20 16 4116 1432 200
1363157993055 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
1363157995033 15920133257 5C-0E-8B-C7-BA-20:CMCC 120.197.40.4 sug.so.360.cn 信息安全 20 20 3156 2936 200
1363157983019 13719199419 68-A1-B7-03-07-B1:CMCC-EASY 120.196.100.82 4 0 240 0 200
1363157984041 13660577991 5C-0E-8B-92-5C-20:CMCC-EASY 120.197.40.4 s19.cnzz.com 站点统计 24 9 6960 690 200
1363157973098 15013685858 5C-0E-8B-C7-F7-90:CMCC 120.197.40.4 rank.ie.sogou.com 搜索引擎 28 27 3659 3538 200
1363157986029 15989002119 E8-99-C4-4E-93-E0:CMCC-EASY 120.196.100.99 www.umeng.com 站点统计 3 3 1938 180 200
1363157992093 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 15 9 918 4938 200
1363157986041 13480253104 5C-0E-8B-C7-FC-80:CMCC-EASY 120.197.40.4 3 3 180 180 200
1363157984040 13602846565 5C-0E-8B-8B-B6-00:CMCC 120.197.40.4 2052.flash2-http.qq.com 综合门户 15 12 1938 2910 200
1363157995093 13922314466 00-FD-07-A2-EC-BA:CMCC 120.196.100.82 img.qfc.cn 12 12 3008 3720 200
1363157982040 13502468823 5C-0A-5B-6A-0B-D4:CMCC-EASY 120.196.100.99 y0.ifengimg.com 综合门户 57 102 7335 110349 200
1363157986072 18320173382 84-25-DB-4F-10-1A:CMCC-EASY 120.196.100.99 input.shouji.sogou.com 搜索引擎 21 18 9531 2412 200
1363157990043 13925057413 00-1F-64-E1-E6-9A:CMCC 120.196.100.55 t3.baidu.com 搜索引擎 69 63 11058 4824 200
1363157988072 13760778710 00-FD-07-A4-7B-08:CMCC 120.196.100.82 2 2 120 120 200
1363157985066 13726238888 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 2468 200
1363157993055 13560436666 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
[hadoop@hadoop01 spooldir]$


The upload to HDFS succeeded.
