Question

I would like to read several csv files from a directory into pandas and concatenate them into one big DataFrame. I have not been able to figure it out though. Here is what I have so far:

import glob
import pandas as pd

# get data file names
path = r'C:\DRO\DCL_rawdata_files'
filenames = glob.glob(path + "/*.csv")

dfs = []
for filename in filenames:
    dfs.append(pd.read_csv(filename))

# Concatenate all data into one DataFrame
big_frame = pd.concat(dfs, ignore_index=True)

I guess I need some help within the for loop???

Answers

See pandas: IO tools for all of the available .read_ methods.
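
As a small, hedged illustration (the file name and column names here are placeholders, not from the original question), read_csv takes keyword arguments that are often handy when you are about to combine many files:

import pandas as pd

# illustrative only: 'one_of_the_files.csv' and the column names are hypothetical
df = pd.read_csv(
    'one_of_the_files.csv',
    usecols=['id', 'date', 'value'],  # read only the columns you need
    dtype={'id': 'int64'},            # enforce dtypes up front
    parse_dates=['date'],             # parse date columns while reading
)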

If you have the same columns in all your CSV files, you can try the code below. I have added header=0 so that, after reading each CSV, the first row is used as the column names.

import pandas as pd
import glob
import os

path = r'C:\DRO\DCL_rawdata_files' # use your path
all_files = glob.glob(os.path.join(path, "*.csv"))

li = []

for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)

frame = pd.concat(li, axis=0, ignore_index=True)

Or, with attribution to a comment from Sid:

all_files = glob.glob(os.path.join(path, "*.csv"))

df = pd.concat((pd.read_csv(f) for f in all_files), ignore_index=True)
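
glob.glob does not guarantee any particular file order, so if the order of the concatenated rows matters (e.g. time-ordered exports), it is worth sorting the file list first. This is just a minor variation on the snippet above, not part of the original answer:

all_files = sorted(glob.glob(os.path.join(path, "*.csv")))  # deterministic order

df = pd.concat((pd.read_csv(f) for f in all_files), ignore_index=True)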

  • It's often necessary to identify each sample of data, which can be accomplished by adding a new column to the dataframe.
  • pathlib from the standard library will be used for this example. It treats paths as objects with methods, instead of strings to be sliced.

Imports and Setup

from pathlib import Path
import pandas as pd
import numpy as np

path = r'C:\DRO\DCL_rawdata_files'  # or unix / linux / mac path

# get the files from the path provided in the OP
files = Path(path).glob('*.csv')  # .rglob to get subdirectories
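
Note that Path.glob returns a one-shot generator, so files is exhausted after a single pass. If you want to run more than one of the options below against the same files variable, materialize it into a list first (a small addition, not in the original answer):

files = list(Path(path).glob('*.csv'))  # a list can be iterated more than once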

Option 1:

  • Add a new column with the file name
dfs = list()
for f in files:
    data = pd.read_csv(f)
    # .stem is a pathlib attribute that gives the filename without the extension
    data['file'] = f.stem
    dfs.append(data)
    
df = pd.concat(dfs, ignore_index=True)

Option 2:

  • Add a new column with a generic name using enumerate
dfs = list()
for i, f in enumerate(files):
    data = pd.read_csv(f)
    data['file'] = f'File {i}'
    dfs.append(data)
    
df = pd.concat(dfs, ignore_index=True)

Option 3:

  • Create the dataframes with a list comprehension, and then use np.repeat to add a new column.
    • [f'S{i}' for i in range(len(dfs))] creates a list of strings to name each dataframe.
    • [len(df) for df in dfs] creates a list of per-dataframe lengths, used as the repeat counts.
  • Attribution for this option goes to this plotting answer.
# read the files into dataframes
dfs = [pd.read_csv(f) for f in files]

# combine the list of dataframes
df = pd.concat(dfs, ignore_index=True)

# add a new column
df['Source'] = np.repeat([f'S{i}' for i in range(len(dfs))], [len(df) for df in dfs])

Option 4:

  • One-liners using .assign to create the new column, with attribution to a comment from C8H10N4O2:
df = pd.concat((pd.read_csv(f).assign(filename=f.stem) for f in files), ignore_index=True)

or

df = pd.concat((pd.read_csv(f).assign(Source=f'S{i}') for i, f in enumerate(files)), ignore_index=True)
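
Whichever option you pick, a quick sanity check is to count how many rows each input contributed. The column name below matches the second one-liner in Option 4; use 'file' or 'filename' for the other options:

# rows contributed by each source file
print(df['Source'].value_counts())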