Question

I have a GCS folder as below:
gs://<bucket-name>/<folder-name>/dt=2017-12-01/part-0000.tsv
gs://<bucket-name>/<folder-name>/dt=2017-12-02/part-0000.tsv
gs://<bucket-name>/<folder-name>/dt=2017-12-03/part-0000.tsv
gs://<bucket-name>/<folder-name>/dt=2017-12-04/part-0000.tsv
...
I want to match only the files under dt=2017-12-02 and dt=2017-12-03 using sc.textFile() in Scio, which uses TextIO.Read.from() underneath as far as I know.
I tried
gs://<bucket-name>/<folder-name>/dt={2017-12-02,2017-12-03}/*.tsv
and
gs://<bucket-name>/<folder-name>/dt=2017-12-(02|03)/*.tsv
but both matched zero files:
INFO org.apache.beam.sdk.io.FileBasedSource - Filepattern gs://<bucket-name>/<folder-name>/dt={2017-12-02,2017-12-03}/*.tsv matched 0 files with total size 0
INFO org.apache.beam.sdk.io.FileBasedSource - Filepattern gs://<bucket-name>/<folder-name>/dt=2017-12-(02|03)/*.tsv matched 0 files with total size 0
What would be a valid filepattern for doing this?
Answer
You need to use the TextIO.readAll() transform, which reads a PCollection<String> of filepatterns. Create the collection of filepatterns either explicitly via Create.of(), or compute it with a ParDo.
import org.apache.beam.sdk.io.TextIO
import org.apache.beam.sdk.transforms.{Create, PTransform}
import org.apache.beam.sdk.values.{PBegin, PCollection}

case class ReadPaths(paths: java.lang.Iterable[String]) extends PTransform[PBegin, PCollection[String]] {
  override def expand(input: PBegin): PCollection[String] =
    input.apply(Create.of(paths)).apply(TextIO.readAll())
}
val paths = Seq(
  "gs://<bucket-name>/<folder-name>/dt=2017-07-01/part-0000.tsv",
  "gs://<bucket-name>/<folder-name>/dt=2017-12-20/part-0000.tsv",
  "gs://<bucket-name>/<folder-name>/dt=2018-03-29/part-0000.tsv",
  "gs://<bucket-name>/<folder-name>/dt=2018-05-04/part-0000.tsv"
)

import scala.collection.JavaConverters._
sc.customInput("Read Paths", ReadPaths(paths.asJava))
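If the set of dates covers more than a couple of days, the filepattern collection fed to TextIO.readAll() can be generated instead of listed by hand. A minimal sketch, assuming an inclusive date range; the datePaths helper is hypothetical, not part of Scio or Beam:

```scala
import java.time.LocalDate

// Hypothetical helper: expand an inclusive date range into one
// filepattern per day, suitable as input to TextIO.readAll().
def datePaths(prefix: String, start: LocalDate, endInclusive: LocalDate): Seq[String] =
  Iterator.iterate(start)(_.plusDays(1))
    .takeWhile(!_.isAfter(endInclusive))
    .map(d => s"$prefix/dt=$d/*.tsv") // LocalDate.toString is ISO yyyy-MM-dd
    .toSeq
```

For the two days in the question, datePaths("gs://<bucket-name>/<folder-name>", LocalDate.of(2017, 12, 2), LocalDate.of(2017, 12, 3)) yields the dt=2017-12-02 and dt=2017-12-03 patterns, which can then be passed to ReadPaths.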
This concludes the answer on how to match multiple files by name with TextIO.Read in Cloud Dataflow.