Question

I have large CSVs where I'm only interested in a subset of the rows. In particular, I'd like to read in all the rows which occur before a particular condition is met.

For example, suppose read_csv yields the dataframe:

     A    B      C
1   34   3.20   'b'
2   24   9.21   'b'
3   34   3.32   'c'
4   24   24.3   'c'
5   35   1.12   'a'
... 
1e9 42   2.15   'd'

Is there some way to read all the rows in the CSV until column B exceeds 10? In the above example, I'd like to read in:

     A    B      C
1   34   3.20   'b'
2   24   9.21   'b'
3   34   3.32   'c'
4   24   24.3   'c'

I know how to throw these rows out once I've read the dataframe in, but by that point I've already spent all that computation reading them. I do not have access to the index of the final row before reading the CSV (no skipfooter, please).

Answers

You could read the CSV in chunks. Since pd.read_csv returns an iterator when the chunksize parameter is specified, you can use itertools.takewhile to read only as many chunks as you need, without reading the whole file.

import itertools as IT
import pandas as pd

chunksize = 10 ** 5
# chunksize makes read_csv return an iterator of DataFrames instead of
# parsing the whole file at once
chunks = pd.read_csv(filename, chunksize=chunksize)
# keep consuming chunks while the last value of B is still below 10
chunks = IT.takewhile(lambda chunk: chunk['B'].iloc[-1] < 10, chunks)
df = pd.concat(chunks)
# trim any rows with B >= 10 that slipped into the kept chunks
mask = df['B'] < 10
df = df.loc[mask]
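Checking only the final B value of each chunk keeps the takewhile predicate cheap: it compares one scalar per chunk instead of scanning every row.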

One subtlety: itertools.takewhile discards the entire first chunk whose last B value is 10 or more, so any valid rows at the start of that boundary chunk are lost (and if that happens on the very first chunk, pd.concat raises a ValueError because it receives nothing to concatenate). To handle the boundary chunk correctly, and to avoid the extra df.loc[mask] pass, perhaps a cleaner solution is to define a custom generator:

import pandas as pd

def valid(chunks):
    # yield whole chunks while every B value is below 10; for the first
    # chunk containing a value of 10 or more, yield only its rows with
    # B < 10 and stop reading further chunks
    for chunk in chunks:
        mask = chunk['B'] < 10
        if mask.all():
            yield chunk
        else:
            yield chunk.loc[mask]
            break

chunksize = 10 ** 5
chunks = pd.read_csv(filename, chunksize=chunksize)
df = pd.concat(valid(chunks))
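
If you would rather know the cutoff in advance, here is an alternative sketch (not from the answer above): pre-scan the file with the stdlib csv module to count the rows you want, then let pandas parse only those via the nrows parameter. The filename variable, the 'B' column, and the threshold of 10 mirror the example; adjust them to your data.

import csv
import pandas as pd

# Cheap text pass: count the leading rows whose B value is below 10.
# Assumes the first line of the file is a header row containing 'B'.
with open(filename, newline='') as f:
    reader = csv.DictReader(f)
    n_rows = 0
    for row in reader:
        if float(row['B']) >= 10:
            break
        n_rows += 1

# Parse only that prefix with pandas.
df = pd.read_csv(filename, nrows=n_rows)

Whether this beats the chunked approach depends on the file: the Python-level csv loop is slower per row than pandas' C parser, but it stops at exactly the right line and avoids the boundary-chunk trimming entirely.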