Question

I have a 3-dimensional NumPy array with shape Nx64x64. I would like to downsample it across dimensions 1 and 2 by taking the mean over 8x8 blocks, resulting in a new array with shape Nx8x8.

I have a couple of working implementations, but I feel like there must be a neater way of doing it.

I initially tried to use np.split:

def subsample(inparray, n):
    inp = inparray.copy()
    # Split axis 1 into equal chunks, stack the chunks on a new leading
    # axis, then move the batch axis back to the front.
    res = np.moveaxis(np.array(np.hsplit(inp, inp.shape[1] // n)), 1, 0)
    # Do the same for the last axis.
    res = np.moveaxis(np.array(np.split(res, inp.shape[2] // n, axis=3)), 1, 0)
    # Average within each block.
    res = np.mean(res, axis=(3, 4))
    return res

I also tried using plain indexing:

def subsample2(inparray, n):
    res = np.zeros((inparray.shape[0], n, n))
    lin = np.linspace(0, inparray.shape[1], n+1).astype(int)
    bounds = np.stack((lin[:-1], lin[1:]), axis=-1)

    for i, b in enumerate(bounds):
        for j, b2 in enumerate(bounds):
            res[:, i, j] = np.mean(inparray[:, b[0]:b[1], b2[0]:b2[1]], axis=(1,2))
    return res
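For comparison, the same blockwise mean can be written as a single reshape followed by a mean over the within-block axes. This is only a sketch: it uses the same convention as subsample2 (n is the output side length) and assumes n divides both trailing dimensions.

```python
import numpy as np

def subsample_reshape(inparray, n):
    # Sketch of a reshape-based blockwise mean; assumes n divides
    # both trailing dimensions. n is the output side length.
    N, h, w = inparray.shape
    # Split each spatial axis into n blocks of size h//n (resp. w//n),
    # then average over the two within-block axes.
    return inparray.reshape(N, n, h // n, n, w // n).mean(axis=(2, 4))

arr = np.arange(2 * 64 * 64, dtype=float).reshape(2, 64, 64)
print(subsample_reshape(arr, 8).shape)  # (2, 8, 8)
```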

I had wondered about using itertools.groupby, but it also looked quite involved.

Does anyone know of a clean solution?

Answers

There is a neat solution in the form of the function block_reduce in the scikit-image package (link to docs).

It has a very simple interface for downsampling arrays by applying a function such as numpy.mean. The downsampling factor can differ per axis: supply a tuple giving the block size along each axis. Here's an example with a 2D array, downsampling only axis 1 by a factor of 5 using the mean:

import numpy as np
from skimage.measure import block_reduce

arr = np.stack((np.arange(1,20), np.arange(20,39)))

# array([[ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
#        [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38]])

arr_reduced = block_reduce(arr, block_size=(1,5), func=np.mean, cval=np.mean(arr))

# The last block has only 4 columns, so it is padded with cval before
# averaging (cval is cast to the array's integer dtype, here 19):
# array([[ 3. ,  8. , 13. , 17.8],
#        [22. , 27. , 32. , 33. ]])
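Applied to the shape in the question, the same call reduces Nx64x64 to Nx8x8 by averaging 8x8 blocks while leaving axis 0 untouched. A quick sketch (the random array is just for illustration):

```python
import numpy as np
from skimage.measure import block_reduce

# Nx64x64 -> Nx8x8: block size 1 on axis 0 keeps it intact, and
# 8x8 blocks on the last two axes are averaged. Since 64 is divisible
# by 8, no padding (cval) is needed here.
arr = np.random.rand(10, 64, 64)
reduced = block_reduce(arr, block_size=(1, 8, 8), func=np.mean)
print(reduced.shape)  # (10, 8, 8)
```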