I am trying to run the Hadoop MapReduce program below.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MovieAnalysis {

    // Mapper: keep only rows whose popularity, vote average and vote count
    // clear the thresholds, and emit (movieId, 1) for each of them.
    public static class MovieFilterMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private Text movieId = new Text();
        private IntWritable one = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] columns = value.toString().split(",");
            if (columns.length >= 8) {
                double popularity = Double.parseDouble(columns[5]);
                double voteAverage = Double.parseDouble(columns[6]);
                double voteCount = Double.parseDouble(columns[7]);
                if (popularity > 500.0 && voteAverage > 8.0 && voteCount > 10000.0) {
                    movieId.set(columns[1]); // Assuming 'id' column contains movie IDs
                    context.write(movieId, one);
                }
            }
        }
    }

    // Reducer: sum the 1s emitted for each movie id.
    public static class MovieCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "Movie Analysis");
        job.setJarByClass(MovieAnalysis.class);
        job.setMapperClass(MovieFilterMapper.class);
        job.setReducerClass(MovieCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
But when I run the code, it fails with a NumberFormatException.
Your code is calling parseDouble on the literal string "popularity", i.e. on the CSV header row. If you are parsing a CSV file, MapReduce does not automatically skip the column header line: every mapper just receives raw lines of text, including the header. Hive or Spark SQL can be told to skip or parse the header, but plain MapReduce cannot, so you have to handle that line yourself in the mapper.
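Here is a minimal sketch of one way to handle it inside your existing MovieFilterMapper: treat any row whose numeric fields do not parse (such as the header, where the sixth field is the literal text "popularity") as a record to skip, instead of letting parseDouble throw. The column positions and thresholds are copied from your question; the try/catch approach is just one possible workaround, not the only fix.

    public static class MovieFilterMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final Text movieId = new Text();
        private final IntWritable one = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] columns = value.toString().split(",");
            if (columns.length < 8) {
                return; // malformed or short line
            }
            double popularity, voteAverage, voteCount;
            try {
                popularity = Double.parseDouble(columns[5]);
                voteAverage = Double.parseDouble(columns[6]);
                voteCount = Double.parseDouble(columns[7]);
            } catch (NumberFormatException e) {
                // Header row (e.g. the literal text "popularity") or a bad record: skip it.
                return;
            }
            if (popularity > 500.0 && voteAverage > 8.0 && voteCount > 10000.0) {
                movieId.set(columns[1]); // Assuming 'id' column contains movie IDs
                context.write(movieId, one);
            }
        }
    }

An alternative is to skip the record when key.get() == 0 (the byte offset of a file's first line), but catching the NumberFormatException, or explicitly checking whether columns[5] equals "popularity", also protects you against other malformed rows. If switching tools is an option, Spark SQL reads the header for you, e.g. spark.read().option("header", "true").csv(args[0]).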