How to parse data and put it into a Spark SQL table

Question (1 vote, 1 answer)

I have log files that I would like to analyze with Spark SQL. Each log entry looks like this:

71.19.157.174 - - [24/Sep/2014:22:26:12 +0000] "GET /error HTTP/1.1" 404 505 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.94 Safari/537.36"

I have a regex pattern that I can use to parse the data:

Pattern.compile("""^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] \"(\S+) (\S+) (\S+)\" (\d{3}) (\d+)""")

I have also created a case class:

case class LogSchema(ip: String, client: String, userid: String, date: String, method: String, endpoint: String, protocol: String, response: String, contentsize: String)

However, I can't work out how to turn this into a table that I can run Spark SQL queries against.

How do I parse the data with the regex and put it into a table?

scala apache-spark apache-spark-sql
1 Answer

4 votes

Assuming your log file is at /home/user/logs/log.txt, you can use the logic below to build a table/DataFrame from it:

import java.util.regex.Pattern

val rdd = sc.textFile("/home/user/logs/log.txt")
val pattern = Pattern.compile("""^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] \"(\S+) (\S+) (\S+)\" (\d{3}) (\d+)""")

import spark.implicits._  // required for .toDF()

val df = rdd.map(line => pattern.matcher(line)).map { matcher =>
  matcher.find()  // advance the matcher to the first match; group() throws if called before this
  LogSchema(matcher.group(1), matcher.group(2), matcher.group(3), matcher.group(4),
    matcher.group(5), matcher.group(6), matcher.group(7), matcher.group(8), matcher.group(9))
}.toDF()
df.show(false)

You should get the following DataFrame:

+-------------+------+------+--------------------------+------+--------+--------+--------+-----------+
|ip           |client|userid|date                      |method|endpoint|protocol|response|contentsize|
+-------------+------+------+--------------------------+------+--------+--------+--------+-----------+
|71.19.157.174|-     |-     |24/Sep/2014:22:26:12 +0000|GET   |/error  |HTTP/1.1|404     |505        |
+-------------+------+------+--------------------------+------+--------+--------+--------+-----------+
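Once you have the DataFrame, you can run Spark SQL queries against it by registering it as a temporary view. A minimal sketch (the view name "logs" is just an example, not anything from the question):

df.createOrReplaceTempView("logs")  // "logs" is an arbitrary view name
spark.sql("SELECT ip, response, count(*) AS hits FROM logs GROUP BY ip, response").show(false)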

I used the case class you provided:

case class LogSchema(ip: String, client: String, userid: String, date: String, method: String, endpoint: String, protocol: String, response: String, contentsize: String)
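One caveat: matcher.group throws an IllegalStateException for any line the regex does not match. If your logs may contain malformed lines, a more defensive sketch (assuming the same pattern, case class, and spark.implicits._ import as above) skips them with flatMap:

val safeDf = rdd.flatMap { line =>
  val matcher = pattern.matcher(line)
  if (matcher.find())  // only keep lines the regex actually matches
    Some(LogSchema(matcher.group(1), matcher.group(2), matcher.group(3), matcher.group(4),
      matcher.group(5), matcher.group(6), matcher.group(7), matcher.group(8), matcher.group(9)))
  else
    None  // silently drop malformed lines
}.toDF()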