hadoop - Reducer is not being called -

This code processes an Ebola data set. The reducer is not being called at all; only the mapper output is being printed.

The driver class:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.*;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class Ebola {
        public static void main(String[] args) throws Exception {
            Configuration con1 = new Configuration();
            con1.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", " ");
            Job job1 = new Job(con1, "ebola");
            job1.setJarByClass(Ebola.class);
            job1.setInputFormatClass(KeyValueTextInputFormat.class);
            job1.setOutputFormatClass(TextOutputFormat.class);
            job1.setOutputKeyClass(Text.class);
            job1.setOutputValueClass(Text.class);
            job1.setMapperClass(EbolaMapper.class);
            job1.setReducerClass(EbolReducer.class);
            FileInputFormat.addInputPath(job1, new Path(args[0]));
            FileOutputFormat.setOutputPath(job1, new Path(args[1]));
            job1.waitForCompletion(true);
        }
    }

The mapper:

    import java.io.IOException;
    import org.apache.hadoop.io.*;
    import org.apache.hadoop.mapreduce.Mapper;

    public class EbolaMapper extends Mapper<Text, Text, Text, Text> {
        public void map(Text key, Text value, Context con) throws IOException, InterruptedException {
            Text cumValues = new Text();
            String record = value.toString();
            String[] p = record.split(" ", 2);
            String cases = p[0];
            String death = p[1];
            String cValues = death + "->" + cases;
            cumValues.set(cValues);
            con.write(key, cumValues);
        }
    }

Finally, the reducer:

    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class EbolReducer extends Reducer<Text, Text, Text, Text> {
        public void reduce(Text key, Text value, Context con) throws IOException {
            Text cumulValues = new Text();
            String cumVal = value.toString();
            String[] p = cumVal.split("->", 2);
            String death = p[0];
            String cases = p[1];
            float d = Float.parseFloat(death);
            float c = Float.parseFloat(cases);
            float perc = (d / c) * 100;
            String percent = String.valueOf(perc);
            cumulValues.set(percent);
            con.write(key, cumulValues);
        }
    }

The output is just the mapper output; the reducer is not being called. Any help is appreciated.

Instead of `public void reduce(Text key, Text value, Context con)`

you need to use an Iterable:

    public void reduce(Text key, Iterable<Text> values, Context con)
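The framework only dispatches to your method if its signature matches `Reducer.reduce(KEYIN, Iterable<VALUEIN>, Context)`; otherwise the default identity implementation runs and the map output passes through unchanged. A corrected version of the question's reducer might look like the sketch below (class and variable names follow the question's code; the `@Override` annotation is added because it makes the compiler reject a non-matching signature, catching exactly this mistake):

```java
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class EbolReducer extends Reducer<Text, Text, Text, Text> {
    private final Text cumulValues = new Text();

    @Override  // fails to compile if the signature doesn't match Reducer.reduce()
    public void reduce(Text key, Iterable<Text> values, Context con)
            throws IOException, InterruptedException {
        // The mapper may emit several "death->cases" values per key,
        // so iterate over all of them.
        for (Text value : values) {
            String[] p = value.toString().split("->", 2);
            float death = Float.parseFloat(p[0]);
            float cases = Float.parseFloat(p[1]);
            float perc = (death / cases) * 100;
            cumulValues.set(String.valueOf(perc));
            con.write(key, cumulValues);
        }
    }
}
```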

hadoop reducers
