Why does writing more data than Java's PipedInputStream buffer size cause it to hang indefinitely?


This Scala code hangs indefinitely:

import java.io._
import scala.io.Source
import scala.concurrent._
import scala.concurrent.ExecutionContext.Implicits.global

def copy(inputStream: InputStream, outputStream: OutputStream): Unit = {
  val buffer = new Array[Byte](1024)
  var bytesRead = inputStream.read(buffer)
  while (bytesRead != -1) {
    outputStream.write(buffer, 0, bytesRead)
    bytesRead = inputStream.read(buffer)
  }
  outputStream.flush()
}

val os0 = new PipedOutputStream
val is0 = new PipedInputStream(os0)

val os1 = new PipedOutputStream
val is1 = new PipedInputStream(os1)

// write to os0
Future {
  os0.write(("foobarbaz"*1000).getBytes("utf-8"))
  os0.close()
}

// copy is0 to os1
Future {
  copy(is0, os1)
  os1.close()
}

// read is1 to output
val result = Source.fromInputStream(is1).mkString
println(result)
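
As far as I can tell, the blocking behaviour of the pipe itself can be reproduced without any Futures. This is only a sketch of my assumption that the write call is what blocks once the pipe's internal buffer (1024 bytes by default) is full and nothing is reading yet:

import java.io._

val out = new PipedOutputStream
val in = new PipedInputStream(out) // default internal buffer is 1024 bytes

// Fills the pipe buffer exactly; this call returns.
out.write(new Array[Byte](1024))
println("wrote 1024 bytes")

// The buffer is now full and nothing has read from `in`,
// so this write blocks indefinitely.
out.write(new Array[Byte](1))
println("never printed")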

But if I change the input data so that it is smaller than PipedInputStream's 1024-byte buffer size, it works, i.e. with

  os0.write(("foobarbaz").getBytes("utf-8"))

it successfully prints

> foobarbaz

What is going on here? Is there some deadlock in the piped-stream implementation?
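
If the 1024-byte buffer really is the limit here (which is just my guess), one experiment would be the two-argument PipedInputStream constructor that takes an explicit pipe size, making both pipes larger than the 9000-byte payload:

// Same setup as above, but with a 16 KiB pipe buffer instead of the
// 1024-byte default, so the whole "foobarbaz"*1000 payload fits at once.
val os0 = new PipedOutputStream
val is0 = new PipedInputStream(os0, 16 * 1024)

val os1 = new PipedOutputStream
val is1 = new PipedInputStream(os1, 16 * 1024)

But even if that made the symptom go away, I would still like to understand why the default-sized pipe hangs instead of the copy future draining it.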

Scastie link: https://scastie.scala-lang.org/131qJRP9Sc6wSVXrqODd2A

java scala io pipe deadlock