I have a large (by number of lines) plain text file that I'd like to split into smaller files, also by number of lines. So if my file has roughly 2M lines, I'd like to split it up into 10 files containing 200k lines each, or 100 files containing 20k lines each (plus one file with the remainder; being evenly divisible doesn't matter).
I could do this fairly easily in Python, but I'm wondering if there's any kind of ninja way to do it with Bash and Unix utilities (as opposed to manually looping and counting/partitioning lines).
Take a look at the split command:
Version: (GNU coreutils) 8.32
$ split --help
Usage: split [OPTION]... [FILE [PREFIX]]
Output pieces of FILE to PREFIXaa, PREFIXab, ...;
default size is 1000 lines, and default PREFIX is 'x'.
With no FILE, or when FILE is -, read standard input.
Mandatory arguments to long options are mandatory for short options too.
-a, --suffix-length=N generate suffixes of length N (default 2)
--additional-suffix=SUFFIX append an additional SUFFIX to file names
-b, --bytes=SIZE put SIZE bytes per output file
-C, --line-bytes=SIZE put at most SIZE bytes of records per output file
-d use numeric suffixes starting at 0, not alphabetic
--numeric-suffixes[=FROM] same as -d, but allow setting the start value
-x use hex suffixes starting at 0, not alphabetic
--hex-suffixes[=FROM] same as -x, but allow setting the start value
-e, --elide-empty-files do not generate empty output files with '-n'
--filter=COMMAND write to shell COMMAND; file name is $FILE
-l, --lines=NUMBER put NUMBER lines/records per output file
-n, --number=CHUNKS generate CHUNKS output files; see explanation below
-t, --separator=SEP use SEP instead of newline as the record separator;
'\0' (zero) specifies the NUL character
-u, --unbuffered immediately copy input to output with '-n r/...'
--verbose print a diagnostic just before each
output file is opened
--help display this help and exit
--version output version information and exit
The SIZE argument is an integer and optional unit (example: 10K is 10*1024).
Units are K,M,G,T,P,E,Z,Y (powers of 1024) or KB,MB,... (powers of 1000).
Binary prefixes can be used, too: KiB=K, MiB=M, and so on.
CHUNKS may be:
N split into N files based on size of input
K/N output Kth of N to stdout
l/N split into N files without splitting lines/records
l/K/N output Kth of N to stdout without splitting lines/records
r/N like 'l' but use round robin distribution
r/K/N likewise but only output Kth of N to stdout
GNU coreutils online help: <https://www.gnu.org/software/coreutils/>
Full documentation <https://www.gnu.org/software/coreutils/split>
or available locally via: info '(coreutils) split invocation'
$
You could do something like this:
split -l 200000 filename
This will create files, each containing 200000 lines, named
xaa xab xac
...
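A quick sanity check on the pieces (a sketch, assuming the default x prefix shown above) could be:
wc -l x*    # each piece should report 200000 lines, except possibly the last one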
Another option is to split by the size of the output file (still splitting on line breaks):
split -C 20m --numeric-suffixes input_filename output_prefix
This creates files like
output_prefix00 output_prefix01 output_prefix02 ...
each with a maximum size of 20 megabytes.
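If you later need to reassemble the pieces (assuming nothing else in the directory matches output_prefix*, and rejoined_file is just an example name), the suffixes sort so that a plain concatenation restores the original order:
cat output_prefix* > rejoined_file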
Use the split command:
split -l 200000 mybigfile.txt
Yes, there is a split command. It will split a file by lines or by bytes.
$ split --help
Usage: split [OPTION]... [INPUT [PREFIX]]
Output fixed-size pieces of INPUT to PREFIXaa, PREFIXab, ...; default
size is 1000 lines, and default PREFIX is `x'. With no INPUT, or when INPUT
is -, read standard input.
Mandatory arguments to long options are mandatory for short options too.
-a, --suffix-length=N use suffixes of length N (default 2)
-b, --bytes=SIZE put SIZE bytes per output file
-C, --line-bytes=SIZE put at most SIZE bytes of lines per output file
-d, --numeric-suffixes use numeric suffixes instead of alphabetic
-l, --lines=NUMBER put NUMBER lines per output file
--verbose print a diagnostic just before each
output file is opened
--help display this help and exit
--version output version information and exit
SIZE may have a multiplier suffix:
b 512, kB 1000, K 1024, MB 1000*1000, M 1024*1024,
GB 1000*1000*1000, G 1024*1024*1024, and so on for T, P, E, Z, Y.
split <file> -l 1000    # split by lines: 1000 lines per output file
split <file> -b 10M     # split by size: 10 MiB per output file
cat x* > <file>         # join the pieces back together (default x prefix)
split -l 10 filename    # 10 lines per output file
split -n 5 filename     # split into 5 files of roughly equal byte size
split -b 512 filename   # 512 bytes per output file
split -C 512 filename   # at most 512 bytes per file, without splitting lines
Split the file "file.txt" into files of 10,000 lines each:
split -l 10000 file.txt
split: splits a file into fixed-size pieces, creating output files containing consecutive sections of INPUT (standard input if none is given or when INPUT is '-').
Syntax split [options] [INPUT [PREFIX]]
split (GNU coreutils, since version 8.8 from 2010-12-22) includes the following parameter:
-n, --number=CHUNKS generate CHUNKS output files; see explanation below
CHUNKS may be:
N split into N files based on size of input
K/N output Kth of N to stdout
l/N split into N files without splitting lines/records
l/K/N output Kth of N to stdout without splitting lines/records
r/N like 'l' but use round robin distribution
r/K/N likewise but only output Kth of N to stdout
Thus,
split -n 4 input output.
will generate four files (output.a{a,b,c,d}) with roughly the same number of bytes, but lines may be broken in the middle. If you want to preserve whole lines (i.e. split by lines), then this should work:
split -n l/4 input output.
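A quick way to check the difference (just a sketch, assuming the four pieces output.a{a,b,c,d} were produced as above):
wc -c output.a*               # byte counts: nearly equal with -n 4, somewhat uneven with -n l/4
cat output.a* | cmp - input   # concatenating the pieces reproduces the original input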
Related answer:
sed -n '1,100p' filename > output.txt
Here, 1 and 100 are the line numbers that will be captured in output.txt.
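The same idea extends to any range; for example, to grab the next hundred lines into a second file (output2.txt is just an example name):
sed -n '101,200p' filename > output2.txt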
The answers already given about split are fine. However, I'm curious why nobody paid attention to the requirements in the question (a given number of chunks, without counting lines by hand):
split -l $(( $(wc -l < "$filename") / $chunks )) "$filename"
This can easily be added as a function to your .bashrc, so you can just invoke it, passing the filename and the number of chunks:
split -l $(( $(wc -l < "$1") / $2 )) "$1"
If you want just x chunks without the remainder ending up in an extra file, simply adapt the formula to add (chunks - 1) lines to each file. I use this approach because usually I just want x number of files rather than x lines per file:
split -l $(( $(wc -l < "$1") / $2 + $2 - 1 )) "$1"
You can add this to a script and call it your "ninja way", because if nothing suits your needs, you can always build it :-)
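As a minimal sketch of such a .bashrc function (the name splitchunks is just an example, wrapping the formula above):
# Usage: splitchunks FILE N  ->  split FILE into at most N pieces by lines
splitchunks () {
  local lines=$(wc -l < "$1")
  split -l $(( lines / $2 + $2 - 1 )) "$1"
}
For example, splitchunks mybigfile.txt 10 would produce pieces xaa, xab, and so on.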
split -l 200 --numeric-suffixes --additional-suffix=".txt" toSplit.txt splited
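With GNU split this should produce files named splited00.txt, splited01.txt, and so on, each holding 200 lines (the last file holding whatever remains).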
This approach will break lines in the middle:
split -b 125m compact.file -d -a 3 compact_prefix
I wanted to merge files and then split them into pieces of about 128 MB each without breaking lines, so I use the following instead:
# Split into ~128 MB pieces; determine whether the HDFS size unit is M or G. Please test before use.
begainsize=`hdfs dfs -du -s -h /externaldata/$table_name/$date/ | awk '{ print $1 }'`
sizeunit=`hdfs dfs -du -s -h /externaldata/$table_name/$date/ | awk '{ print $2 }'`
if [ $sizeunit = "G" ]; then
  res=$(printf "%.f" `echo "scale=5;$begainsize*8" | bc`)      # size in GB -> number of 128 MB pieces (x * 1024 / 128)
else
  res=$(printf "%.f" `echo "scale=5;$begainsize/128" | bc`)    # size in MB -> number of 128 MB pieces; rounding ref: http://blog.csdn.net/naiveloafer/article/details/8783518
fi
echo $res
# Split into $res pieces with a 3-digit numeric suffix. Ref: http://blog.csdn.net/microzone/article/details/52839598
compact_file_name=$compact_file"_"
echo "compact_file_name: "$compact_file_name
split -n l/$res $basedir/$compact_file -d -a 3 $basedir/${compact_file_name}
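For the non-HDFS case, a simpler sketch using -C (shown in the help output above) caps each piece at roughly 128 MB without breaking lines, assuming the same compact.file and compact_prefix names:
split -C 128m -d -a 3 compact.file compact_prefix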