How do I read 10 records at a time from a CSV in Python or PySpark?

Question · votes: 0 · answers: 2

I have a CSV file with 100,000 rows. I want to read 10 rows at a time, process each row to save it to its own file, and then sleep for 5 seconds. I tried islice, but it only reads the first 10 rows and stops; I want the program to run until EOF. In case it helps, I am using Jupyter, Python 2, and PySpark.

from itertools import islice
from time import sleep

with open("per-vehicle-records-2020-01-31.csv") as f:
    while True:
        next_n_lines = list(islice(f, 10))
        if not next_n_lines:
            break
        else:
            print(next_n_lines)
            sleep(5)

This does not separate the rows: it merges all 10 lines into a single list.

['"cosit","year","month","day","hour","minute","second","millisecond","minuteofday","lane","lanename","straddlelane","straddlelanename","class","classname","length","headway","gap","speed","weight","temperature","duration","validitycode","numberofaxles","axleweights","axlespacings"\n', '"000000000997","2020","1","31","1","30","2","0","90","1","Test1","0","","5","HGV_RIG","11.4","2.88","3.24","70.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","3","0","90","2","Test2","0","","2","CAR","5.2","3.17","2.92","71.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","5","0","90","1","Test1","0","","2","CAR","5.1","2.85","2.51","70.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","6","0","90","2","Test2","0","","2","CAR","5.1","3.0","2.94","69.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","9","0","90","1","Test1","0","","5","HGV_RIG","11.5","3.45","3.74","70.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","10","0","90","2","Test2","0","","2","CAR","5.4","3.32","3.43","71.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","13","0","90","2","Test2","0","","2","CAR","5.3","3.19","3.23","71.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","13","0","90","1","Test1","0","","2","CAR","5.2","3.45","3.21","70.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","16","0","90","1","Test1","0","","5","HGV_RIG","11.0","2.9","3.13","69.0","0.0","0.0","0","0","0","",""\n']
python pyspark bigdata
2 Answers

0 votes

To read 10 rows at a time, you can pass the chunksize parameter to read_csv:

import pandas as pd

# With chunksize set, read_csv returns an iterator of DataFrames
# (10 rows each) rather than a single DataFrame
chunks = pd.read_csv(path, chunksize=10)
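A minimal sketch of how this iterator could be consumed (the in-memory CSV here is a stand-in for the per-vehicle-records-2020-01-31.csv file in the question, and the sleep is shortened so the snippet runs instantly):

```python
import io
import time
import pandas as pd

# Small in-memory CSV standing in for the 100,000-row file:
# a header plus 25 data rows
csv_data = io.StringIO(
    "cosit,classname,speed\n"
    + "\n".join("000000000997,CAR,%d.0" % (60 + i) for i in range(25))
    + "\n"
)

chunk_sizes = []
# chunksize=10 yields DataFrames of at most 10 rows until EOF,
# so the loop naturally stops at the end of the file
for chunk in pd.read_csv(csv_data, chunksize=10):
    chunk_sizes.append(len(chunk))
    time.sleep(0)  # would be time.sleep(5) in the real script

print(chunk_sizes)  # [10, 10, 5]
```

Note that 25 rows split into chunks of 10 produce a final partial chunk of 5, confirming the loop runs to EOF rather than stopping after the first batch.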

0
投票

islice consumes the file iterator, so after the assignment you need to iterate over each batch of lines:

from itertools import islice
from time import sleep

with open("per-vehicle-records-2020-01-31.csv") as f:
    while True:
        next_n_lines = list(islice(f, 10))
        if not next_n_lines:
            break
        else:
            for line in next_n_lines:
                print(line)
            sleep(5)

You can read more at How to read file N lines at a time in Python?
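Since the question ultimately wants to process each row's fields (which are quoted in the sample data), the same islice pattern can be combined with csv.reader, which parses the quoting instead of returning raw strings. This is a sketch under the same assumptions as above (an in-memory stand-in for the real file, sleep shortened); what to do with each parsed row is left to the caller:

```python
import csv
import io
import time
from itertools import islice

# In-memory stand-in for per-vehicle-records-2020-01-31.csv:
# a quoted header plus 12 quoted data rows
data = io.StringIO(
    '"cosit","year","speed"\n'
    + "".join('"000000000997","2020","%d.0"\n' % (60 + i) for i in range(12))
)

reader = csv.reader(data)
header = next(reader)  # skip the header row

batches = []
while True:
    # csv.reader is an iterator, so islice pulls the next 10 parsed rows
    batch = list(islice(reader, 10))
    if not batch:
        break  # EOF reached
    batches.append(batch)
    time.sleep(0)  # would be time.sleep(5) in the real script

print(len(batches), len(batches[0]))
print(batches[0][0])
```

Each element of a batch is now a list of unquoted field values (e.g. ['000000000997', '2020', '60.0']), which is easier to process per row than the raw newline-terminated strings in the question's output.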
