1. Using a multiprocessing Pool

import multiprocessing

import pandas as pd

def func(args):
    # do something
    return df  # return a DataFrame

if __name__ == "__main__":
    pool = multiprocessing.Pool()

    # pass func an iterable of arguments; each element becomes one task
    result = pd.concat(pool.map(func, args_list), ignore_index=True)
    pool.close()
    pool.join()
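To make the pattern above concrete, here is a minimal runnable sketch that splits a DataFrame into chunks with numpy.array_split and processes each chunk in a separate worker process. The worker add_one and the column name "x" are illustrative, not from the original.

import multiprocessing

import numpy as np
import pandas as pd

def add_one(chunk):
    # example worker: operate on one chunk and return a DataFrame
    chunk = chunk.copy()
    chunk["x"] = chunk["x"] + 1
    return chunk

if __name__ == "__main__":
    df = pd.DataFrame({"x": range(100)})
    chunks = np.array_split(df, 4)  # one chunk per task
    pool = multiprocessing.Pool(4)
    result = pd.concat(pool.map(add_one, chunks), ignore_index=True)
    pool.close()
    pool.join()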

2. Speeding up pandas groupby-apply with multiprocessing Pool map, single-argument version

from multiprocessing import Pool

import pandas as pd

def processParallel(group):
    df, name = group

    # do something with this group's DataFrame
    return df

if __name__ == '__main__':
    pool = Pool(50)
    # scd is the input DataFrame; each (group, name) tuple becomes one task
    result = pd.concat(pool.map(processParallel,
                                [(group, name) for name, group in scd.groupby('col')]),
                       ignore_index=True)
    pool.close()
    pool.join()
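As a self-contained illustration of this groupby pattern, here is a minimal sketch; the column names col/value and the worker group_mean are made up for the example.

from multiprocessing import Pool

import pandas as pd

def group_mean(group):
    # one task per group: receive (DataFrame, group key), return a one-row summary
    df, name = group
    return pd.DataFrame({"col": [name], "mean_value": [df["value"].mean()]})

if __name__ == "__main__":
    scd = pd.DataFrame({"col": list("aabbcc"), "value": [1, 2, 3, 4, 5, 6]})
    pool = Pool(3)
    result = pd.concat(pool.map(group_mean,
                                [(group, name) for name, group in scd.groupby("col")]),
                       ignore_index=True)
    pool.close()
    pool.join()
    print(result)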

3. Speeding up pandas groupby-apply with multiprocessing Pool map, passing multiple arguments

Note: the parameter that is iterated over must come first; the remaining fixed arguments are bound with functools.partial.

from functools import partial
from multiprocessing import Pool

import pandas as pd

def processParallel(group, a, b):
    df, name = group

    # do something with this group's DataFrame, using the fixed arguments a and b
    return df

if __name__ == '__main__':
    pool = Pool(50)
    # bind the fixed arguments a and b with partial; only `group` is iterated over
    partialProcessParallel = partial(processParallel, a=1, b=2)
    result = pd.concat(pool.map(partialProcessParallel,
                                [(group, name) for name, group in scd.groupby('col')]),
                       ignore_index=True)
    pool.close()
    pool.join()

In practice this performs better than joblib's Parallel and delayed.
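For reference, the joblib equivalent of the single-argument version looks roughly like the sketch below; the actual speed comparison depends on the data, the work done per group, and the backend.

from joblib import Parallel, delayed

import pandas as pd

def processParallel(group):
    df, name = group
    # do something with this group's DataFrame
    return df

# scd is the input DataFrame, as above
result = pd.concat(
    Parallel(n_jobs=8)(
        delayed(processParallel)((group, name)) for name, group in scd.groupby('col')
    ),
    ignore_index=True,
)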
