I am trying to run a spatial operation on data for which I have latitude/longitude values plus a static GeoJSON file. I need to load the GeoJSON and, for each row of the DataFrame, look up which location the latitude/longitude point falls into, using an intersection (point-in-polygon) check.
The source data is:
1111:150458,025.22826N,055.30022E,348,39,JOB_ONBOARD
2222:150448,025.22746N,055.29962E,32,48, CAR_AVAILABLE
3333,20072023:150612,025.30559N,055.38272E,130,50,CAR_AVAILABLE
4444,20072023:150740,025.21794N,055.28569E,0,0,JOB_ONBOARD
I tried to follow the Apache Sedona documentation but without success.
Please guide me on how to proceed. Thank you.
import org.apache.sedona.core.serde.SedonaKryoRegistrator
import org.apache.sedona.sql.utils.SedonaSQLRegistrator
import org.apache.spark.serializer.KryoSerializer
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr

val spark: SparkSession = SparkSession.builder()
.appName("test")
.config("spark.master", "local[*]")
.config("spark.serializer", classOf[KryoSerializer].getName)
.config("spark.kryo.registrator", classOf[SedonaKryoRegistrator].getName)
.getOrCreate()
SedonaSQLRegistrator.registerAll(spark)
val inputLocation = "C:\\communities_0.geojson"
val schema = "type string, crs string, totalFeatures long, features array<struct<type string, geometry string, properties map<string, string>>>"
spark.read.schema(schema).json(inputLocation)
.selectExpr("explode(features) as features") // Explode the envelope to get one feature per row.
.select("features.*") // Unpack the features struct.
.withColumn("geometry", expr("ST_GeomFromGeoJSON(geometry)")) // Convert the geometry string.
.printSchema()
The final result DF should look like this:
1111:150458,025.22826N,055.30022E,348,39,JOB_ONBOARD, community_A
2222:150448,025.22746N,055.29962E,32,48, CAR_AVAILABLE, community_B
3333,20072023:150612,025.30559N,055.38272E,130,50,CAR_AVAILABLE, community_C
4444,20072023:150740,025.21794N,055.28569E,0,0,JOB_ONBOARD, community_D
Your source data is not GeoJSON or any typical geospatial format. Consider cleaning it first by stripping the letters N and E, so that it looks like this:
1111:150458,025.22826,055.30022,348,39,JOB_ONBOARD
2222:150448,025.22746,055.29962,32,48, CAR_AVAILABLE
3333,20072023:150612,025.30559,055.38272,130,50,CAR_AVAILABLE
4444,20072023:150740,025.21794,055.28569,0,0,JOB_ONBOARD
Then you can create a geometry column with Sedona's ST_Point: https://sedona.apache.org/1.4.1/api/sql/Constructor/#st_point
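To make the cleaning step concrete, here is a minimal Scala sketch (plain Scala, no Spark required) that strips the hemisphere letter and returns signed decimal degrees. `parseCoord` is a hypothetical helper name, not part of Sedona; it also handles S/W for generality, although your sample only contains N/E.

```scala
// Hypothetical helper: convert "025.22826N" / "055.30022E" style strings
// into signed decimal degrees suitable for ST_Point(lon, lat).
object CoordParsing {
  def parseCoord(raw: String): Double = {
    val trimmed = raw.trim
    val hemisphere = trimmed.last            // expected: N, S, E, or W
    val degrees = trimmed.dropRight(1).toDouble
    hemisphere match {
      case 'N' | 'E' => degrees              // northern/eastern: positive
      case 'S' | 'W' => -degrees             // southern/western: negative
      case other     => sys.error(s"Unexpected hemisphere marker: $other")
    }
  }

  def main(args: Array[String]): Unit = {
    println(parseCoord("025.22826N")) // 25.22826
    println(parseCoord("055.30022E")) // 55.30022
  }
}
```

Once the coordinates are clean doubles, you could (this is an assumption about your column names) build the point with `expr("ST_Point(lon, lat)")` and then join against the exploded GeoJSON features with a predicate like `ST_Contains(geometry, point)` to attach the community name to each row.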