Sparklyr split string (to string columns)


Trying to split a string in sparklyr and then use the resulting columns to join/filter.

I tried the suggested approach of tokenizing the string and then separating it into new columns. Here is a reproducible example (note that copy_to turns my NA values into the string "NA", which I then have to translate back into a real NA; is there a way to avoid this?):

x <- data.frame(Id=c(1,2,3,4),A=c('A-B','A-C','A-D',NA))
df <- copy_to(sc,x,'df')

df %>%
  mutate(A = ifelse(A == 'NA', NA, A)) %>%
  ft_regex_tokenizer(input.col = "A", output.col = "B", pattern = "-", to_lower_case = F) %>%
  sdf_separate_column("B", into = c("C", "D")) %>%
  filter(C == 'A')

The problem is that if I try to filter on the newly created columns (e.g. %>% filter(C=='A')) or join on them, I get an error; see below:

Error : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 367.0 failed 4 times, most recent failure: Lost task 0.3 in stage 367.0 (TID 5062, 10.139.64.4, executor 0): org.apache.spark.SparkException: Failed to execute user defined function($anonfun$createTransformFunc$2: (string) => array<string>)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$11$$anon$1.hasNext(WholeStageCodegenExec.scala:622)
    at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:51)
    at org.apache.spark.sql.execution.collect.Collector$$anonfun$2.apply(Collector.scala:148)
    at org.apache.spark.sql.execution.collect.Collector$$anonfun$2.apply(Collector.scala:147)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.doRunTask(Task.scala:139)
    at org.apache.spark.scheduler.Task.run(Task.scala:112)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$13.apply(Executor.scala:497)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1432)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:503)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
    at java.util.regex.Matcher.getTextLength(Matcher.java:1283)
    at java.util.regex.Matcher.reset(Matcher.java:309)
    at java.util.regex.Matcher.<init>(Matcher.java:229)
    at java.util.regex.Pattern.matcher(Pattern.java:1093)
    at java.util.regex.Pattern.split(Pattern.java:1206)
    at java.util.regex.Pattern.split(Pattern.java:1273)
    at scala.util.matching.Regex.split(Regex.scala:526)
    at org.apache.spark.ml.feature.RegexTokenizer$$anonfun$createTransformFunc$2.apply(Tokenizer.scala:144)
    at org.apache.spark.ml.feature.RegexTokenizer$$anonfun$createTransformFunc$2.apply(Tokenizer.scala:141)
    ... 15 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:2100)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2088)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2087)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2087)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:1076)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:1076)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1076)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2319)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2267)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2255)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:873)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2252)
    at org.apache.spark.sql.execution.collect.Collector.runSparkJobs(Collector.scala:259)
    at org.apache.spark.sql.execution.collect.Collector.collect(Collector.scala:269)
    at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:69)
    at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:75)
    at org.apache.spark.sql.execution.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:497)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollectResult(limit.scala:48)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectResult(Dataset.scala:2827)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3439)
    at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2794)
    at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2794)
    at org.apache.spark.sql.Dataset$$anonfun$54.apply(Dataset.scala:3423)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withCustomExecutionEnv$1.apply(SQLExecution.scala:99)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:228)
    at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:85)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:158)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withAction(Dataset.scala:3422)
    at org.apache.spark.sql.Dataset.collect(Dataset.scala:2794)
    at sparklyr.Utils$.collect(utils.scala:200)
    at sparklyr.Utils.collect(utils.scala)
    at sun.reflect.GeneratedMethodAccessor577.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sparklyr.Invoke.invoke(invoke.scala:139)
    at sparklyr.StreamHandler.handleMethodCall(stream.scala:123)
    at sparklyr.StreamHandler.read(stream.scala:66)
    at sparklyr.BackendHandler.channelRead0(handler.scala:51)
    at sparklyr.BackendHandler.channelRead0(handler.scala:4)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkExcepti
In addition: Warning messages:
1: The parameter `input.col` is deprecated and will be removed in a future release. Please use `input_col` instead. 
2: The parameter `output.col` is deprecated and will be removed in a future release. Please use `output_col` instead

Not sure why, since the type of the created columns is "StringType" according to sdf_schema.

Is there a way in sparklyr to actually split a column into string columns that I can use afterwards, without writing the data frame out to a file or collecting it to the driver node?

r apache-spark apache-spark-ml sparklyr
1 Answer

Spark ML transformers are not a good choice here. Instead, you should use the split function:

df %>% 
  mutate(B = split(A, "-")) %>% 
  sdf_separate_column("B", into = c("C", "D")) %>%
  filter(C %IS NOT DISTINCT FROM% "A") 
# Source: spark<?> [?? x 5]
     Id A     B          C     D    
  <dbl> <chr> <list>     <chr> <chr>
1     1 A-B   <list [2]> A     B    
2     2 A-C   <list [2]> A     C    
3     3 A-D   <list [2]> A     D  
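
Note that split here is not base R's split; it is passed through by dbplyr and resolves to Spark SQL's split function. If in doubt, you can print the generated query (a minimal sketch using dplyr's show_query, which works on sparklyr tables):

df %>%
  # split() is not evaluated in R; it becomes Spark SQL's split()
  mutate(B = split(A, "-")) %>%
  show_query()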

or regexp_extract:

pattern <- "^(.*)-(.*)$"

df %>% 
   mutate(
     C = regexp_extract(A, pattern, 1),
     D = regexp_extract(A, pattern, 2)
   ) %>%
   filter(C %IS NOT DISTINCT FROM% "A") 
# Source: spark<?> [?? x 4]
     Id A     C     D    
  <dbl> <chr> <chr> <chr>
1     1 A-B   A     B    
2     2 A-C   A     C    
3     3 A-D   A     D    
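
As an aside, %IS NOT DISTINCT FROM% is passed through by dbplyr as SQL's null-safe equality operator. In this particular case a plain equality filter behaves the same, since a NULL comparison result is treated as false and the row is dropped anyway (a small sketch, reusing df and pattern from above):

df %>%
  mutate(C = regexp_extract(A, pattern, 1)) %>%
  # C is NULL for the NA row, so NULL == "A" yields NULL and the row is dropped
  filter(C == "A")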

However, if you want to make the RegexTokenizer work, you have to handle NULLs (the external representation of R's NA) first. This can be done, for example, with coalesce:

tokenizer <- ft_regex_tokenizer(
  sc, input_col = "A", output_col = "B",
  pattern = "-", to_lower_case = F
)

df %>%  
  mutate(A = coalesce(A, "")) %>% 
  ml_transform(tokenizer, .) %>%
  sdf_separate_column("B", into=c("C", "D")) %>%
  filter(C %IS NOT DISTINCT FROM% "A")
# Source: spark<?> [?? x 5]
     Id A     B          C     D    
  <dbl> <chr> <list>     <chr> <chr>
1     1 A-B   <list [2]> A     B    
2     2 A-C   <list [2]> A     C    
3     3 A-D   <list [2]> A     D    
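
If the tokenization is only one stage of a larger workflow, the same transformer can also be placed in an ML pipeline instead of being applied directly. A minimal sketch, assuming the remaining stages are added elsewhere:

# ft_* functions append a stage when called on an ml_pipeline
pipeline <- ml_pipeline(sc) %>%
  ft_regex_tokenizer(
    input_col = "A", output_col = "B",
    pattern = "-", to_lower_case = FALSE
  )

df %>%
  mutate(A = coalesce(A, "")) %>%
  # fitting is a no-op for a pure transformer; this just applies the stages
  ml_fit_and_transform(pipeline, .)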

Or by removing the missing data first:

df %>%  
  # or filter(!is.na(A))
  na.omit(columns=c("A")) %>%                      
  ml_transform(tokenizer, .) %>%
  sdf_separate_column("B", into=c("C", "D")) %>%
  filter(C %IS NOT DISTINCT FROM% "A")
* Dropped 1 rows with 'na.omit' (4 => 3)
# Source: spark<?> [?? x 5]
     Id A     B          C     D    
  <dbl> <chr> <list>     <chr> <chr>
1     1 A-B   <list [2]> A     B    
2     2 A-C   <list [2]> A     C    
3     3 A-D   <list [2]> A     D  
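
As for the copy_to aside in the question: if the missing values arrive as the literal string "NA", na_if (translated by dbplyr to SQL NULLIF) turns them back into real NULLs, which is a bit more direct than the ifelse in the question. A sketch, assuming the "NA" strings are produced by copy_to as described:

df %>%
  # NULLIF(A, 'NA'): returns NULL where A equals the string "NA"
  mutate(A = na_if(A, "NA"))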