How to stop lines being merged into the previous line when copying and processing a continuously written log file into another file


I am trying to prepend the matching username to each IP address in a log file that is being written to continuously. But new lines end up appended onto the previous line, making the log file impossible to parse.

Note: this is a web-server log file that is continuously written to. My code looks up each IP captured in the log, finds the corresponding username, and inserts that username at the beginning of that IP's lines in the live log, in a loop. The first run completes without errors, but from the second run onward the lines get jumbled, as shown below.

for i in $ips
do
...
..
cp $server_log $log_file                  # snapshot the live log
sed -i "/^$i/ s/./$user &/" $log_file     # prepend $user to lines starting with this IP
cp $log_file $server_log                  # copy the snapshot back over the live log
...
...
done
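The corruption most likely comes from the final `cp` back over the live log: the web server keeps appending while the loop runs, so each copy-back overwrites lines the server wrote in the meantime, splicing new entries mid-line. A safer pattern is to never rewrite the live log at all and write the annotated lines to a separate file. The sketch below demonstrates this with illustrative file names (`server.log`, `annotated.log`) and a hypothetical `user_for` lookup standing in for whatever the real loop does:

```shell
#!/bin/sh
# Simulate a small server log (in the real setup the web server is
# still appending to this file).
cat > server.log <<'EOF'
10.0.0.1 -[12/Feb/2023 02:46:23] "GET /folder/ HTTP/1.1" 200 -
10.0.0.2 -[12/Feb/2023 02:47:20] "GET /folder2/ HTTP/1.1" 200 -
EOF

# Hypothetical IP -> user lookup, standing in for the real one.
user_for() {
    case "$1" in
        10.0.0.1) echo user1 ;;
        10.0.0.2) echo user2 ;;
    esac
}

# Read the live log once and write annotated lines to a SEPARATE file;
# the live log is never rewritten, so the server can keep appending.
: > annotated.log
while IFS= read -r line; do
    ip=${line%% *}          # first whitespace-separated field is the IP
    printf '%s %s\n' "$(user_for "$ip")" "$line" >> annotated.log
done < server.log
```

Downstream analysis then reads `annotated.log` while the server owns `server.log` exclusively.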

Input file

10.xx.xx.xxx -[12/Feb/2023 02:46:23] "GET /folder/ HTTP/1.1" 200 -
10.xx.xx.xxx -[12/Feb/2023 02:46:44] "GET /folder/ HTTP/1.1" 200 -
10.xx.xx.56 -[12/Feb/2023 02:47:20] "GET /folder2/HTTP/1.1" 200 -

Output

user1 10.xx.xx.xxx -[12/Feb/2023 02:46:23] "GET /folder/ HTTP/1.1" 200 -
user1 10.xx.xx.xxx -[12/Feb/2023 02:46:44] "GET /folder/ HT10.xx.xx.56 -[12/Feb/2023 02:47:20] "GET /folder2/HTTP/1.1" 200 -

Expected output

user1 10.xx.xx.34 -[12/Feb/2023 02:46:23] "GET /folder/ HTTP/1.1" 200 -
user1 10.xx.xx.34 -[12/Feb/2023 02:46:44] "GET /folder/ HTTP/1.1" 200 -
user2 10.xx.xx.56 -[12/Feb/2023 02:47:20] "GET /folder2/HTTP/1.1" 200 -
bash shell unix awk text-processing
1 Answer
$ cat ips_file
10.xx.xx.101 user1
10.xx.xx.102 user2
10.xx.xx.103 user3

$ cat logfile 
10.xx.xx.101 -[12/Feb/2023 02:46:23] "GET /folder1/ HTTP/1.1" 200 -
10.xx.xx.101 -[12/Feb/2023 02:46:44] "GET /folder1/ HTTP/1.1" 200 -
10.xx.xx.102 -[12/Feb/2023 02:47:20] "GET /folder2/ HTTP/1.1" 200 -
10.xx.xx.101 -[12/Feb/2023 02:46:44] "GET /folder1/ HTTP/1.1" 200 -
10.xx.xx.103 -[12/Feb/2023 02:46:44] "GET /folder3/ HTTP/1.1" 200 -

Script

awk -i inplace '
    NR==FNR{                # first file: build the IP -> user map
        userip[$1]=$2
        next
    }
    ($1 in userip){ $0 = userip[$1] " " $0 }
    1                       # print every line
' inplace::enable=0 ips_file inplace::enable=1 logfile
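The `inplace::enable=0`/`inplace::enable=1` toggles and `-i inplace` are GNU awk extensions: the map file is read normally, and only `logfile` is rewritten in place. On systems without gawk, a portable variant (a sketch using the same illustrative file names) is to write to a temporary file and move it over the log:

```shell
#!/bin/sh
# Illustrative inputs matching the answer's example data.
cat > ips_file <<'EOF'
10.xx.xx.101 user1
10.xx.xx.102 user2
EOF
cat > logfile <<'EOF'
10.xx.xx.101 -[12/Feb/2023 02:46:23] "GET /folder1/ HTTP/1.1" 200 -
10.xx.xx.102 -[12/Feb/2023 02:47:20] "GET /folder2/ HTTP/1.1" 200 -
EOF

awk '
    NR==FNR { userip[$1]=$2; next }           # first file: IP -> user map
    ($1 in userip) { $0 = userip[$1] " " $0 } # prepend the user if known
    1                                         # print every line
' ips_file logfile > logfile.tmp && mv logfile.tmp logfile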

Output

$ cat logfile 
user1 10.xx.xx.101 -[12/Feb/2023 02:46:23] "GET /folder1/ HTTP/1.1" 200 -
user1 10.xx.xx.101 -[12/Feb/2023 02:46:44] "GET /folder1/ HTTP/1.1" 200 -
user2 10.xx.xx.102 -[12/Feb/2023 02:47:20] "GET /folder2/ HTTP/1.1" 200 -
user1 10.xx.xx.101 -[12/Feb/2023 02:46:44] "GET /folder1/ HTTP/1.1" 200 -
user3 10.xx.xx.103 -[12/Feb/2023 02:46:44] "GET /folder3/ HTTP/1.1" 200 -