2.2.2 Hadoop Ecosystem - Offline Computing with MapReduce: The WordCount Example

This article walks through the classic WordCount example of MapReduce distributed computing in the Hadoop ecosystem. It is meant as a practical, end-to-end reference; follow along to run it yourself.

Contents

1. Requirement

2. Data preparation

2.1 Create a new file

2.2 Add the content and save

2.3 Upload to HDFS

3. Writing the program in IDEA

3.1 pom.xml

3.2 Mapper

3.3 Reducer

3.4 Define the main class, describe the Job, and submit it

3.5 Package a jar in IDEA and upload it to /export/software on node01

4. Run the jar and check the results


1. Requirement

        Count the total number of occurrences of each word across a set of given text files. For example, if the input contained the lines "hello,world" and "hello", the output would be hello 2 and world 1.

2. Data preparation

2.1 Create a new file

cd /export/servers
vim wordcount.txt

2.2 Add the content and save

hello,world,hadoop
hive,sqoop,flume,hello
kitty,tom,jerry,world
hadoop

2.3 Upload to HDFS

hdfs dfs -mkdir /wordcount/
hdfs dfs -put wordcount.txt /wordcount/
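
You can confirm the upload succeeded before moving on (an optional check, not part of the original steps):

hdfs dfs -ls /wordcount/

This should list wordcount.txt inside the /wordcount/ directory.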

3. Writing the program in IDEA

3.1 pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>cn.itcast</groupId>
    <artifactId>day03_mapreduce_wordcount</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>6</source>
                    <target>6</target>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <repositories>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>jdk.tools</groupId>
            <artifactId>jdk.tools</artifactId>
            <version>1.8</version>
            <scope>system</scope>
            <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>3.0.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>3.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs-client</artifactId>
            <version>3.0.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>3.0.0</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>RELEASE</version>
            <scope>compile</scope>
        </dependency>
    </dependencies>
</project>

3.2 Mapper

package com.ucas.mapredece;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

/**
 * @author GONG
 * @version 1.0
 * @date 2020/10/8 23:19
 */
public class WordCountMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // key is the byte offset of the line within the file; value is one line of text
        String line = value.toString();
        String[] split = line.split(",");
        // Emit a (word, 1) pair for every comma-separated word on the line
        for (String word : split) {
            context.write(new Text(word), new LongWritable(1));
        }
    }
}
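
To make the data flow concrete: for the first line of the sample file, the framework calls map with the byte offset as the key and the line as the value, and the mapper emits one (word, 1) pair per token:

map(0, "hello,world,hadoop")  ->  (hello, 1), (world, 1), (hadoop, 1)

Across the whole sample file this produces 12 such pairs, which matches the "Map output records=12" counter in the job log later in this article.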

3.3 Reducer

package com.ucas.mapredece;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

/**
 * @author GONG
 * @version 1.0
 * @date 2020/10/8 23:20
 */
class WordCountReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        // values contains every 1 emitted by the mappers for this word; sum them up
        long count = 0;
        for (LongWritable value : values) {
            count += value.get();
        }
        // Emit (word, total count)
        context.write(key, new LongWritable(count));
    }
}
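
Between the map and reduce phases, the framework shuffles and groups the pairs by key, so each reduce call receives one word together with all of its 1s. For example:

reduce("hello", [1, 1])  ->  (hello, 2)

With the sample data there are 9 distinct words, hence 9 reduce groups and 9 output records, as the job counters below confirm.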

3.4 Define the main class, describe the Job, and submit it

package com.ucas.mapredece;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class JobMain extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(super.getConf(), JobMain.class.getSimpleName());
        // Required when the jar runs on the cluster, so Hadoop can locate the jar by its main class
        job.setJarByClass(JobMain.class);

        // Step 1: read the input files and parse them into key/value pairs
        job.setInputFormatClass(TextInputFormat.class);
        TextInputFormat.addInputPath(job, new Path("hdfs://192.168.0.101:8020/wordcount"));

        // Step 2: set our mapper class
        job.setMapperClass(WordCountMapper.class);
        // Set the output types of the map phase
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        // Steps 3, 4, 5, and 6 (partitioning, sorting, combining, grouping) use the defaults and are omitted

        // Step 7: set our reducer class
        job.setReducerClass(WordCountReducer.class);
        // Set the output types of the reduce phase
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // Step 8: set the output format and the output path
        job.setOutputFormatClass(TextOutputFormat.class);
        TextOutputFormat.setOutputPath(job, new Path("hdfs://192.168.0.101:8020/wordcount_out"));
        // The output path must not already exist; the framework creates the directory for us

        boolean b = job.waitForCompletion(true);
        return b ? 0 : 1;
    }

    /**
     * Entry point of the program
     *
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        Tool tool = new JobMain();
        int run = ToolRunner.run(configuration, tool, args);
        System.exit(run);
    }
}
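
Note that the input and output paths are hardcoded to hdfs://192.168.0.101:8020. Because JobMain already runs through ToolRunner, which passes leftover command-line arguments into run(), one common variant (a sketch, not part of the original tutorial) is to take the two paths from the command line so the jar can be reused against other directories:

        // Inside run(), replacing the two hardcoded paths;
        // by this (assumed) convention, args[0] is the input dir and args[1] the output dir
        TextInputFormat.addInputPath(job, new Path(args[0]));
        TextOutputFormat.setOutputPath(job, new Path(args[1]));

The job would then be launched as: hadoop jar day03_mapreduce_wordcount-1.0-SNAPSHOT.jar com.ucas.mapredece.JobMain /wordcount /wordcount_out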

3.5 Package a jar in IDEA and upload it to /export/software on node01
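
If you prefer the command line to IDEA's packaging, the same jar can be built with Maven and copied over (assuming node01 is reachable by that hostname; adjust to your environment):

mvn clean package
scp target/day03_mapreduce_wordcount-1.0-SNAPSHOT.jar root@node01:/export/software/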

4. Run the jar and check the results

Go to the directory: cd /export/software

Run the command: hadoop jar day03_mapreduce_wordcount-1.0-SNAPSHOT.jar com.ucas.mapredece.JobMain

[root@node01 software]# hadoop jar day03_mapreduce_wordcount-1.0-SNAPSHOT.jar com.ucas.mapredece.JobMain
2020-10-09 20:47:59,083 INFO client.RMProxy: Connecting to ResourceManager at node01/192.168.0.101:8032
2020-10-09 20:48:00,154 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1602247634978_0001
2020-10-09 20:48:01,299 INFO input.FileInputFormat: Total input files to process : 1
2020-10-09 20:48:01,532 INFO mapreduce.JobSubmitter: number of splits:1
2020-10-09 20:48:01,592 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2020-10-09 20:48:01,892 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1602247634978_0001
2020-10-09 20:48:01,894 INFO mapreduce.JobSubmitter: Executing with tokens: []
2020-10-09 20:48:02,961 INFO conf.Configuration: resource-types.xml not found
2020-10-09 20:48:02,961 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2020-10-09 20:48:03,741 INFO impl.YarnClientImpl: Submitted application application_1602247634978_0001
2020-10-09 20:48:03,825 INFO mapreduce.Job: The url to track the job: http://node01:8088/proxy/application_1602247634978_0001/
2020-10-09 20:48:03,826 INFO mapreduce.Job: Running job: job_1602247634978_0001
2020-10-09 20:48:19,613 INFO mapreduce.Job: Job job_1602247634978_0001 running in uber mode : false
2020-10-09 20:48:19,642 INFO mapreduce.Job:  map 0% reduce 0%
2020-10-09 20:48:28,806 INFO mapreduce.Job:  map 100% reduce 0%
2020-10-09 20:48:34,851 INFO mapreduce.Job:  map 100% reduce 100%
2020-10-09 20:48:35,916 INFO mapreduce.Job: Job job_1602247634978_0001 completed successfully
2020-10-09 20:48:36,200 INFO mapreduce.Job: Counters: 53
	File System Counters
		FILE: Number of bytes read=197
		FILE: Number of bytes written=431667
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=185
		HDFS: Number of bytes written=70
		HDFS: Number of read operations=8
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=6124
		Total time spent by all reduces in occupied slots (ms)=3936
		Total time spent by all map tasks (ms)=6124
		Total time spent by all reduce tasks (ms)=3936
		Total vcore-milliseconds taken by all map tasks=6124
		Total vcore-milliseconds taken by all reduce tasks=3936
		Total megabyte-milliseconds taken by all map tasks=6270976
		Total megabyte-milliseconds taken by all reduce tasks=4030464
	Map-Reduce Framework
		Map input records=4
		Map output records=12
		Map output bytes=167
		Map output materialized bytes=197
		Input split bytes=114
		Combine input records=0
		Combine output records=0
		Reduce input groups=9
		Reduce shuffle bytes=197
		Reduce input records=12
		Reduce output records=9
		Spilled Records=24
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=168
		CPU time spent (ms)=2310
		Physical memory (bytes) snapshot=487010304
		Virtual memory (bytes) snapshot=4846088192
		Total committed heap usage (bytes)=302223360
		Peak Map Physical memory (bytes)=372805632
		Peak Map Virtual memory (bytes)=2409140224
		Peak Reduce Physical memory (bytes)=114204672
		Peak Reduce Virtual memory (bytes)=2436947968
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=71
	File Output Format Counters
		Bytes Written=70
[root@node01 software]# 
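
One thing to keep in mind: the job fails with an "output directory already exists" error if /wordcount_out is still there from a previous run, so delete it before re-running:

hdfs dfs -rm -r /wordcount_out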

Result:
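The output lives in /wordcount_out on HDFS; with a single reducer and the default settings, the counts end up in one file named part-r-00000, which you can print directly:

hdfs dfs -cat /wordcount_out/part-r-00000

For the sample input this should show each word with its total, tab-separated and sorted by key:

flume	1
hadoop	2
hello	2
hive	1
jerry	1
kitty	1
sqoop	1
tom	1
world	2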

That wraps up this WordCount walkthrough of MapReduce offline computing in the Hadoop ecosystem. We hope it is a helpful reference!


