Table of Contents

Preface

Quick Links

NEON Transpose Instructions

Rotate Right 90°

4x4 Matrix Right-Rotation Example

Grayscale (Single-Channel) Right Rotation by 90°

Color (RGB, Three-Channel) Right Rotation by 90°

Rotate Left 90°

4x4 Matrix Left-Rotation Example

Grayscale (Single-Channel) Left Rotation by 90°

Color (RGB, Three-Channel) Left Rotation by 90°

Timing Comparison on RK3288-linux (hard-float)


Preface

Image rotation is just a simple coordinate transform. On an RK3288-linux (hard-float) board, rotating a 1080p image right by 90° takes only about 17 ms with OpenCV 4.1, so on its own it is hardly worth an article. Still, it is worth pushing for every bit of speed we can get, and that is only part of the motivation. The other part is a recent requirement at work to accelerate 90° rotation: in some scenarios the image has to be rotated before being fed into a neural network for face detection, and because the full face-recognition pipeline consumes most of the board's resources, this supposedly simple rotation stretched to 30-40 ms. That is alarming when face detection itself only takes roughly 20-30 ms, and it was the direct trigger for writing this post.

Of course, similar NEON-based solutions for accelerating matrix rotation have been published before, the earliest being https://www.cnblogs.com/hrlnw/p/3723072.html. This post builds on that work by adding accelerated rotation code for RGB color images and by comparing rotation times between OpenCV and the code presented here on RK3288.

 

Quick Links

NEON acceleration of matrix right rotation by 90°: https://www.cnblogs.com/hrlnw/p/3723072.html

Matrix rotation by 180° and 270° (-90°): https://www.cnblogs.com/hrlnw/p/3767853.html

OpenCV rotation by 90°, 180°, and 270°: https://www.cnblogs.com/alexYuin/p/9493242.html

OpenCV implements image rotation with transpose + flip. That is already much faster than the affine-warp function warpAffine, but a two-step approach can never quite match a single-pass solution, which is what the rest of this post introduces. For reference, the conventional OpenCV routes look roughly like the sketch below.
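The following sketch is for orientation only and is not part of the original post; it assumes OpenCV >= 3.2 (for cv::rotate) and uses illustrative function names.

#include <opencv2/opencv.hpp>

// One-call route: cv::rotate internally performs transpose + flip.
static cv::Mat rotate90_cw_opencv(const cv::Mat& src)
{
    cv::Mat dst;
    cv::rotate(src, dst, cv::ROTATE_90_CLOCKWISE);
    return dst;
}

// Explicit two-step route: transpose, then a horizontal flip gives a clockwise 90-degree turn.
static cv::Mat rotate90_cw_transpose_flip(const cv::Mat& src)
{
    cv::Mat tmp, dst;
    cv::transpose(src, tmp);
    cv::flip(tmp, dst, 1);
    return dst;
}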

 

NEON Transpose Instructions

The key to accelerating rotation with NEON is ARM's vtrn_u8/u16/u32... family of transpose instructions. They only transpose 2x2 blocks, but that is entirely sufficient for matrix rotation; it all comes down to how you use them.

The transpose works as sketched below: corresponding pairs of lanes from the two input vectors are treated as 2x2 blocks and transposed in place, so not much more needs to be said.

uint8x8_t mat0, mat1;
uint8x8x2_t result = vtrn_u8(mat0, mat1);
// result.val[0] = {mat0[0], mat1[0], mat0[2], mat1[2], mat0[4], mat1[4], mat0[6], mat1[6]}
// result.val[1] = {mat0[1], mat1[1], mat0[3], mat1[3], mat0[5], mat1[5], mat0[7], mat1[7]}
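A minimal, self-contained demo (assuming an ARM target with NEON; not part of the original post) that prints the two halves produced by vtrn_u8 so the 2x2 transposition can be inspected directly:

#include <arm_neon.h>
#include <stdio.h>

int main(void)
{
    uint8_t row0[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
    uint8_t row1[8] = { 10, 11, 12, 13, 14, 15, 16, 17 };
    uint8x8x2_t t = vtrn_u8(vld1_u8(row0), vld1_u8(row1));

    uint8_t out[16];
    vst1_u8(out, t.val[0]);       // prints: 0 10 2 12 4 14 6 16
    vst1_u8(out + 8, t.val[1]);   // prints: 1 11 3 13 5 15 7 17
    for (int i = 0; i < 16; i++)
        printf("%d%c", out[i], (i == 7 || i == 15) ? '\n' : ' ');
    return 0;
}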

               

 

Rotate Right 90°

4x4 Matrix Right-Rotation Example

Before getting to the image-rotation code, a small matrix example demonstrates that the algorithm works. First, right rotation by 90°:

[Figure: the matrix before rotation (left) and after right rotation by 90° (right)]

Algorithm steps:

1) Transpose each 2x2 block across every pair of adjacent rows, but note that the inputs are passed in reverse order: for the first two rows, the second row is the first argument of vtrn and the first row is the second argument. The result:

[Figure: original matrix → adjacent rows swapped → adjacent-row 2x2 transpose]

2) Treat every two pixels in a row as one unit and transpose across interleaved rows, again with the inputs reversed: rows 2 and 0 form one pair, rows 3 and 1 the other. This produces the final rotated matrix (a scalar sketch of both passes follows the figure below). The process:

[Figure: result of the previous step → interleaved rows swapped → interleaved-row transpose]

Once the example is clear, the image-rotation code below is easy.

The only difference between rotating an image and rotating a small matrix is that an image is large, so we tile it. Considering how the instructions load data, as well as code size and speed, we split the image into 8x8 blocks, rotate each block, and write it into the corresponding position of the output image. A scalar reference version is sketched below; the NEON implementations that follow should produce identical output.
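This scalar baseline is not part of the original post; it is a hedged reference for checking the NEON code. The destination is expected to be h pixels wide and w pixels tall, with sstep/dstep the row strides in bytes.

#include <stdint.h>
#include <stddef.h>

static void rotate_pos90_gray_scalar(const uint8_t* src, size_t sstep,
                                     uint8_t* dst, size_t dstep, int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            dst[x * dstep + (h - 1 - y)] = src[y * sstep + x];   // dst(x, h-1-y) = src(y, x)
}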

Grayscale (Single-Channel) Right Rotation by 90°


static void rotate_pos90_gray(uint8_t* src, size_t sstep, uint8_t* dst, size_t dstep, int w,int h)
{
    uint8x8x4_t     mat[2];
    uint8x8x2_t     temp8x8x2[4];
    uint16x4x4_t    temp16x4x4[2];
    uint16x4x2_t    temp16x4x2[4];
    uint32x2x4_t    temp32x2x4[2];
    uint32x2x2_t    temp32x2x2[4];

    int dw=w&7; int sw=w-dw;
    int dh=h&7; int sh=h-dh;
    
    for(int y=0;y<sh;y+=8)
    {
		uint8_t* pIndexY = src+y*sstep;
		int tr_x = h-y-8;
        for(int x=0;x<sw;x+=8)
        {
			uint8_t* ptr_y = dst+x*dstep;

            //load 8x8 matrix
            mat[0].val[0] = vld1_u8(pIndexY+        x);
            mat[0].val[1] = vld1_u8(pIndexY+  sstep+x);
            mat[0].val[2] = vld1_u8(pIndexY+2*sstep+x);
            mat[0].val[3] = vld1_u8(pIndexY+3*sstep+x);
            mat[1].val[0] = vld1_u8(pIndexY+4*sstep+x);
            mat[1].val[1] = vld1_u8(pIndexY+5*sstep+x);
            mat[1].val[2] = vld1_u8(pIndexY+6*sstep+x);
            mat[1].val[3] = vld1_u8(pIndexY+7*sstep+x);

            //transpose for two nearby rows 
            temp8x8x2[0] = vtrn_u8(mat[0].val[1],mat[0].val[0]); 
            temp8x8x2[1] = vtrn_u8(mat[0].val[3],mat[0].val[2]); 
            temp8x8x2[2] = vtrn_u8(mat[1].val[1],mat[1].val[0]); 
            temp8x8x2[3] = vtrn_u8(mat[1].val[3],mat[1].val[2]); 

            //interpret two 8bit to one 16bit data
            temp16x4x4[0].val[0] = vreinterpret_u16_u8(temp8x8x2[0].val[0]);
            temp16x4x4[0].val[1] = vreinterpret_u16_u8(temp8x8x2[0].val[1]);
            temp16x4x4[0].val[2] = vreinterpret_u16_u8(temp8x8x2[1].val[0]);
            temp16x4x4[0].val[3] = vreinterpret_u16_u8(temp8x8x2[1].val[1]);
            temp16x4x4[1].val[0] = vreinterpret_u16_u8(temp8x8x2[2].val[0]);
            temp16x4x4[1].val[1] = vreinterpret_u16_u8(temp8x8x2[2].val[1]);
            temp16x4x4[1].val[2] = vreinterpret_u16_u8(temp8x8x2[3].val[0]);
            temp16x4x4[1].val[3] = vreinterpret_u16_u8(temp8x8x2[3].val[1]);

            //transpose for two interleaved rows 
            temp16x4x2[0] = vtrn_u16(temp16x4x4[0].val[2],temp16x4x4[0].val[0]); 
            temp16x4x2[1] = vtrn_u16(temp16x4x4[0].val[3],temp16x4x4[0].val[1]); 
            temp16x4x2[2] = vtrn_u16(temp16x4x4[1].val[2],temp16x4x4[1].val[0]); 
            temp16x4x2[3] = vtrn_u16(temp16x4x4[1].val[3],temp16x4x4[1].val[1]); 

            temp32x2x4[0].val[0] = vreinterpret_u32_u16(temp16x4x2[0].val[0]);
            temp32x2x4[0].val[1] = vreinterpret_u32_u16(temp16x4x2[0].val[1]);
            temp32x2x4[0].val[2] = vreinterpret_u32_u16(temp16x4x2[1].val[0]);
            temp32x2x4[0].val[3] = vreinterpret_u32_u16(temp16x4x2[1].val[1]);
            temp32x2x4[1].val[0] = vreinterpret_u32_u16(temp16x4x2[2].val[0]);
            temp32x2x4[1].val[1] = vreinterpret_u32_u16(temp16x4x2[2].val[1]);
            temp32x2x4[1].val[2] = vreinterpret_u32_u16(temp16x4x2[3].val[0]);
            temp32x2x4[1].val[3] = vreinterpret_u32_u16(temp16x4x2[3].val[1]);

            temp32x2x2[0] = vtrn_u32(temp32x2x4[1].val[0],temp32x2x4[0].val[0]); 
            temp32x2x2[1] = vtrn_u32(temp32x2x4[1].val[1],temp32x2x4[0].val[1]); 
            temp32x2x2[2] = vtrn_u32(temp32x2x4[1].val[2],temp32x2x4[0].val[2]); 
            temp32x2x2[3] = vtrn_u32(temp32x2x4[1].val[3],temp32x2x4[0].val[3]); 

            mat[0].val[0] = vreinterpret_u8_u32(temp32x2x2[0].val[0]); 
            mat[0].val[1] = vreinterpret_u8_u32(temp32x2x2[2].val[0]); 
            mat[0].val[2] = vreinterpret_u8_u32(temp32x2x2[1].val[0]); 
            mat[0].val[3] = vreinterpret_u8_u32(temp32x2x2[3].val[0]); 
            mat[1].val[0] = vreinterpret_u8_u32(temp32x2x2[0].val[1]); 
            mat[1].val[1] = vreinterpret_u8_u32(temp32x2x2[2].val[1]); 
            mat[1].val[2] = vreinterpret_u8_u32(temp32x2x2[1].val[1]); 
            mat[1].val[3] = vreinterpret_u8_u32(temp32x2x2[3].val[1]); 

            vst1_u8(ptr_y        +tr_x, mat[0].val[0]);
            vst1_u8(ptr_y+  dstep+tr_x, mat[0].val[1]);
            vst1_u8(ptr_y+2*dstep+tr_x, mat[0].val[2]);
            vst1_u8(ptr_y+3*dstep+tr_x, mat[0].val[3]);
            vst1_u8(ptr_y+4*dstep+tr_x, mat[1].val[0]);
            vst1_u8(ptr_y+5*dstep+tr_x, mat[1].val[1]);
            vst1_u8(ptr_y+6*dstep+tr_x, mat[1].val[2]);
            vst1_u8(ptr_y+7*dstep+tr_x, mat[1].val[3]);
        }
    }
    
    for(int y=sh;y<h;y++)
    {
        int tr_x = h-y-1;
        for(int x=0;x<w;x++)
		{
            uint8_t* ptr_y = dst+x*dstep;
			ptr_y[tr_x] = src[y*sstep+x];
		}
    }

    for(int x=sw;x<w;x++)
	{    
        uint8_t* ptr_y = dst+x*dstep;
        for(int y=0;y<sh;y++)
        {
            int tr_x = h-y-1;
 			ptr_y[tr_x] = src[y*sstep+x];
        }
    }

    return;
}
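A hedged usage sketch (not from the original post): wrap the function with cv::Mat and compare it against cv::rotate. It assumes an 8-bit single-channel input and OpenCV >= 3.2; the wrapper name is illustrative.

#include <opencv2/opencv.hpp>

static cv::Mat rotate90_cw_neon(const cv::Mat& gray)
{
    CV_Assert(gray.type() == CV_8UC1);
    cv::Mat dst(gray.cols, gray.rows, CV_8UC1);   // output has gray.cols rows and gray.rows cols
    rotate_pos90_gray(gray.data, gray.step, dst.data, dst.step, gray.cols, gray.rows);
    return dst;
}

// Quick correctness check:
//   cv::Mat ref;  cv::rotate(gray, ref, cv::ROTATE_90_CLOCKWISE);
//   CV_Assert(cv::norm(ref, rotate90_cw_neon(gray), cv::NORM_INF) == 0);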

Color (RGB, Three-Channel) Right Rotation by 90°

This version uses the small helper v_store_interleave_rotate, whose definition is given at the end of the post; declare it before this function when compiling.

static void rotate_pos90_rgb(uint8_t* src, size_t sstep, uint8_t* dst, size_t dstep, int w,int h, int cn)
{
    uint8x8x4_t mat10,mat11,mat12;
    uint8x8x4_t mat20,mat21,mat22;
    uint8x8x2_t temp10,temp11,temp12;
    uint8x8x2_t temp20,temp21,temp22;
    uint8x8x2_t temp30,temp31,temp32;
    uint8x8x2_t temp40,temp41,temp42;
    uint16x4x4_t temp110,temp111,temp112;
    uint16x4x4_t temp120,temp121,temp122;
    uint16x4x2_t temp50,temp51,temp52;
    uint16x4x2_t temp60,temp61,temp62;
    uint16x4x2_t temp70,temp71,temp72;
    uint16x4x2_t temp80,temp81,temp82;
    uint32x2x4_t temp210,temp211,temp212;
    uint32x2x4_t temp220,temp221,temp222;
    uint32x2x2_t res10,res11,res12;
    uint32x2x2_t res20,res21,res22;
    uint32x2x2_t res30,res31,res32;
    uint32x2x2_t res40,res41,res42;

    int dw=w&7; int sw=w-dw;
    int dh=h&7; int sh=h-dh;
    
    int x,y;
    for(y=0;y<sh;y=y+8)
    {
		uint8_t* pIndexY = src+y*sstep;
		int tr_x = (h-y-8)*cn;
        for(x=0;x<sw;x=x+8)
        {
			int xIndex = x * cn;
			uint8_t* ptr_y = dst+x*dstep;

			uint8x8x3_t v = vld3_u8(pIndexY + xIndex);
			mat10.val[0] = v.val[0]; mat11.val[0] = v.val[1]; mat12.val[0] = v.val[2];
			
			v = vld3_u8((pIndexY +   sstep) + xIndex);
            mat10.val[1] = v.val[0]; mat11.val[1] = v.val[1]; mat12.val[1] = v.val[2];

            v = vld3_u8((pIndexY + 2*sstep) + xIndex);
			mat10.val[2] = v.val[0]; mat11.val[2] = v.val[1]; mat12.val[2] = v.val[2];

            v = vld3_u8((pIndexY + 3*sstep) + xIndex);
			mat10.val[3] = v.val[0]; mat11.val[3] = v.val[1]; mat12.val[3] = v.val[2];
			
			v = vld3_u8((pIndexY + 4*sstep) + xIndex);
			mat20.val[0] = v.val[0]; mat21.val[0] = v.val[1]; mat22.val[0] = v.val[2];
			
			v = vld3_u8((pIndexY + 5*sstep) + xIndex);
            mat20.val[1] = v.val[0]; mat21.val[1] = v.val[1]; mat22.val[1] = v.val[2];

            v = vld3_u8((pIndexY + 6*sstep) + xIndex);
			mat20.val[2] = v.val[0]; mat21.val[2] = v.val[1]; mat22.val[2] = v.val[2];

            v = vld3_u8((pIndexY + 7*sstep) + xIndex);
			mat20.val[3] = v.val[0]; mat21.val[3] = v.val[1]; mat22.val[3] = v.val[2];

            temp10=vtrn_u8(mat10.val[1],mat10.val[0]); temp11=vtrn_u8(mat11.val[1],mat11.val[0]); temp12=vtrn_u8(mat12.val[1],mat12.val[0]); 
            temp20=vtrn_u8(mat10.val[3],mat10.val[2]); temp21=vtrn_u8(mat11.val[3],mat11.val[2]); temp22=vtrn_u8(mat12.val[3],mat12.val[2]);
            temp30=vtrn_u8(mat20.val[1],mat20.val[0]); temp31=vtrn_u8(mat21.val[1],mat21.val[0]); temp32=vtrn_u8(mat22.val[1],mat22.val[0]);
            temp40=vtrn_u8(mat20.val[3],mat20.val[2]); temp41=vtrn_u8(mat21.val[3],mat21.val[2]); temp42=vtrn_u8(mat22.val[3],mat22.val[2]);

            temp110.val[0]=vreinterpret_u16_u8(temp10.val[0]); temp111.val[0]=vreinterpret_u16_u8(temp11.val[0]); temp112.val[0]=vreinterpret_u16_u8(temp12.val[0]);
            temp110.val[1]=vreinterpret_u16_u8(temp10.val[1]); temp111.val[1]=vreinterpret_u16_u8(temp11.val[1]); temp112.val[1]=vreinterpret_u16_u8(temp12.val[1]);
            temp110.val[2]=vreinterpret_u16_u8(temp20.val[0]); temp111.val[2]=vreinterpret_u16_u8(temp21.val[0]); temp112.val[2]=vreinterpret_u16_u8(temp22.val[0]);
            temp110.val[3]=vreinterpret_u16_u8(temp20.val[1]); temp111.val[3]=vreinterpret_u16_u8(temp21.val[1]); temp112.val[3]=vreinterpret_u16_u8(temp22.val[1]);
            temp120.val[0]=vreinterpret_u16_u8(temp30.val[0]); temp121.val[0]=vreinterpret_u16_u8(temp31.val[0]); temp122.val[0]=vreinterpret_u16_u8(temp32.val[0]);
            temp120.val[1]=vreinterpret_u16_u8(temp30.val[1]); temp121.val[1]=vreinterpret_u16_u8(temp31.val[1]); temp122.val[1]=vreinterpret_u16_u8(temp32.val[1]);
            temp120.val[2]=vreinterpret_u16_u8(temp40.val[0]); temp121.val[2]=vreinterpret_u16_u8(temp41.val[0]); temp122.val[2]=vreinterpret_u16_u8(temp42.val[0]);
            temp120.val[3]=vreinterpret_u16_u8(temp40.val[1]); temp121.val[3]=vreinterpret_u16_u8(temp41.val[1]); temp122.val[3]=vreinterpret_u16_u8(temp42.val[1]);

            temp50=vtrn_u16(temp110.val[2],temp110.val[0]); temp51=vtrn_u16(temp111.val[2],temp111.val[0]); temp52=vtrn_u16(temp112.val[2],temp112.val[0]);
            temp60=vtrn_u16(temp110.val[3],temp110.val[1]); temp61=vtrn_u16(temp111.val[3],temp111.val[1]); temp62=vtrn_u16(temp112.val[3],temp112.val[1]);
            temp70=vtrn_u16(temp120.val[2],temp120.val[0]); temp71=vtrn_u16(temp121.val[2],temp121.val[0]); temp72=vtrn_u16(temp122.val[2],temp122.val[0]);
            temp80=vtrn_u16(temp120.val[3],temp120.val[1]); temp81=vtrn_u16(temp121.val[3],temp121.val[1]); temp82=vtrn_u16(temp122.val[3],temp122.val[1]);

            temp210.val[0]=vreinterpret_u32_u16(temp50.val[0]); temp211.val[0]=vreinterpret_u32_u16(temp51.val[0]); temp212.val[0]=vreinterpret_u32_u16(temp52.val[0]);
            temp210.val[1]=vreinterpret_u32_u16(temp50.val[1]); temp211.val[1]=vreinterpret_u32_u16(temp51.val[1]); temp212.val[1]=vreinterpret_u32_u16(temp52.val[1]);
            temp210.val[2]=vreinterpret_u32_u16(temp60.val[0]); temp211.val[2]=vreinterpret_u32_u16(temp61.val[0]); temp212.val[2]=vreinterpret_u32_u16(temp62.val[0]);
            temp210.val[3]=vreinterpret_u32_u16(temp60.val[1]); temp211.val[3]=vreinterpret_u32_u16(temp61.val[1]); temp212.val[3]=vreinterpret_u32_u16(temp62.val[1]);
            temp220.val[0]=vreinterpret_u32_u16(temp70.val[0]); temp221.val[0]=vreinterpret_u32_u16(temp71.val[0]); temp222.val[0]=vreinterpret_u32_u16(temp72.val[0]);
            temp220.val[1]=vreinterpret_u32_u16(temp70.val[1]); temp221.val[1]=vreinterpret_u32_u16(temp71.val[1]); temp222.val[1]=vreinterpret_u32_u16(temp72.val[1]);
            temp220.val[2]=vreinterpret_u32_u16(temp80.val[0]); temp221.val[2]=vreinterpret_u32_u16(temp81.val[0]); temp222.val[2]=vreinterpret_u32_u16(temp82.val[0]);
            temp220.val[3]=vreinterpret_u32_u16(temp80.val[1]); temp221.val[3]=vreinterpret_u32_u16(temp81.val[1]); temp222.val[3]=vreinterpret_u32_u16(temp82.val[1]);

            res10=vtrn_u32(temp220.val[0],temp210.val[0]); res11=vtrn_u32(temp221.val[0],temp211.val[0]); res12=vtrn_u32(temp222.val[0],temp212.val[0]);
            res20=vtrn_u32(temp220.val[1],temp210.val[1]); res21=vtrn_u32(temp221.val[1],temp211.val[1]); res22=vtrn_u32(temp222.val[1],temp212.val[1]);
            res30=vtrn_u32(temp220.val[2],temp210.val[2]); res31=vtrn_u32(temp221.val[2],temp211.val[2]); res32=vtrn_u32(temp222.val[2],temp212.val[2]);
            res40=vtrn_u32(temp220.val[3],temp210.val[3]); res41=vtrn_u32(temp221.val[3],temp211.val[3]); res42=vtrn_u32(temp222.val[3],temp212.val[3]);

            mat10.val[0]=vreinterpret_u8_u32(res10.val[0]); mat11.val[0]=vreinterpret_u8_u32(res11.val[0]); mat12.val[0]=vreinterpret_u8_u32(res12.val[0]);
            mat10.val[1]=vreinterpret_u8_u32(res30.val[0]); mat11.val[1]=vreinterpret_u8_u32(res31.val[0]); mat12.val[1]=vreinterpret_u8_u32(res32.val[0]);
            mat10.val[2]=vreinterpret_u8_u32(res20.val[0]); mat11.val[2]=vreinterpret_u8_u32(res21.val[0]); mat12.val[2]=vreinterpret_u8_u32(res22.val[0]);
            mat10.val[3]=vreinterpret_u8_u32(res40.val[0]); mat11.val[3]=vreinterpret_u8_u32(res41.val[0]); mat12.val[3]=vreinterpret_u8_u32(res42.val[0]);
            mat20.val[0]=vreinterpret_u8_u32(res10.val[1]); mat21.val[0]=vreinterpret_u8_u32(res11.val[1]); mat22.val[0]=vreinterpret_u8_u32(res12.val[1]);
            mat20.val[1]=vreinterpret_u8_u32(res30.val[1]); mat21.val[1]=vreinterpret_u8_u32(res31.val[1]); mat22.val[1]=vreinterpret_u8_u32(res32.val[1]);
            mat20.val[2]=vreinterpret_u8_u32(res20.val[1]); mat21.val[2]=vreinterpret_u8_u32(res21.val[1]); mat22.val[2]=vreinterpret_u8_u32(res22.val[1]);
            mat20.val[3]=vreinterpret_u8_u32(res40.val[1]); mat21.val[3]=vreinterpret_u8_u32(res41.val[1]); mat22.val[3]=vreinterpret_u8_u32(res42.val[1]);

            v_store_interleave_rotate(ptr_y        +tr_x,mat10.val[0],mat11.val[0],mat12.val[0]);    
            v_store_interleave_rotate(ptr_y+  dstep+tr_x,mat10.val[1],mat11.val[1],mat12.val[1]);
            v_store_interleave_rotate(ptr_y+2*dstep+tr_x,mat10.val[2],mat11.val[2],mat12.val[2]);
            v_store_interleave_rotate(ptr_y+3*dstep+tr_x,mat10.val[3],mat11.val[3],mat12.val[3]);
            v_store_interleave_rotate(ptr_y+4*dstep+tr_x,mat20.val[0],mat21.val[0],mat22.val[0]);
            v_store_interleave_rotate(ptr_y+5*dstep+tr_x,mat20.val[1],mat21.val[1],mat22.val[1]);
            v_store_interleave_rotate(ptr_y+6*dstep+tr_x,mat20.val[2],mat21.val[2],mat22.val[2]);
            v_store_interleave_rotate(ptr_y+7*dstep+tr_x,mat20.val[3],mat21.val[3],mat22.val[3]);
        }
    }

    for(y=sh;y<h;y++)
    {
        int tr_x = (h-y-1)*cn;
        for(x=0;x<w;x++)
        {
            uint8_t* ptr_y = dst+x*dstep;
            for(int n=0;n<cn;n++)
            {
                ptr_y[tr_x+n]=src[y*sstep+x*cn+n];   // source pixel offset is x*cn bytes, not x
            }
        }
    }

    for(x=sw;x<w;x++)
    {
        uint8_t* ptr_y = dst+x*dstep;
        for(y=0;y<sh;y++)
        {
            int tr_x = (h-y-1)*cn;
            for(int n=0;n<cn;n++)
            {
                ptr_y[tr_x+n]=src[y*sstep+x*cn+n];   // source pixel offset is x*cn bytes, not x
            }
        }
    }
    return;
}
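Similarly, a hedged usage sketch for the color path (not from the original post); it assumes an 8-bit, 3-channel cv::Mat and OpenCV >= 3.2 for the reference check, and the wrapper name is illustrative.

#include <opencv2/opencv.hpp>

static cv::Mat rotate90_cw_neon_rgb(const cv::Mat& bgr)
{
    CV_Assert(bgr.type() == CV_8UC3);
    cv::Mat dst(bgr.cols, bgr.rows, CV_8UC3);
    rotate_pos90_rgb(bgr.data, bgr.step, dst.data, dst.step,
                     bgr.cols, bgr.rows, bgr.channels());
    return dst;
}

// Reference check:
//   cv::Mat ref;  cv::rotate(bgr, ref, cv::ROTATE_90_CLOCKWISE);
//   CV_Assert(cv::norm(ref, rotate90_cw_neon_rgb(bgr), cv::NORM_INF) == 0);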

 

Rotate Left 90°

4x4 Matrix Left-Rotation Example

Left rotation by 90° is simpler than right rotation: the row data does not need to be passed in reverse order, and everything else is similar:

[Figure: the matrix before rotation (left) and after left rotation by 90° (right)]

Algorithm steps:

1) Transpose each 2x2 block directly across every pair of adjacent rows (adjacent-row transpose):

[Figure: original matrix → adjacent-row 2x2 transpose]

2) As before, treat every two pixels in a row as one unit and transpose across interleaved rows. Note that a vertical flip is also needed here, but don't worry about its cost: it is absorbed by simply writing the rows out in bottom-up order during the store, which costs nothing extra. This yields the final left-rotated matrix (a scalar reference follows the figure below):

[Figure: result of the previous step → interleaved-row transpose → stored with rows vertically reversed]

Grayscale (Single-Channel) Left Rotation by 90°

static void rotate_neg90_gray(uint8_t* src, size_t sstep, uint8_t* dst, size_t dstep, int w, int h)
{
    uint8x8x4_t     mat[2]; //use 2 register array to load a 8x8 patch
    uint8x8x2_t     temp8x8x2[4];
    uint16x4x2_t    temp16x4x2[8];
    uint32x2x2_t    temp32x2x2[8];

    int dw=w&7; int sw=w-dw;
    int dh=h&7; int sh=h-dh;
    
    for(int i=0;i<sh;i+=8)
    {
        for(int j=0;j<sw;j+=8)
        {
            //step0 load 8x8 bytes in 8 registers
            mat[0].val[0] = vld1_u8(src+ i   *sstep+j);
            mat[0].val[1] = vld1_u8(src+(i+1)*sstep+j);
            mat[0].val[2] = vld1_u8(src+(i+2)*sstep+j);
            mat[0].val[3] = vld1_u8(src+(i+3)*sstep+j);
            mat[1].val[0] = vld1_u8(src+(i+4)*sstep+j);
            mat[1].val[1] = vld1_u8(src+(i+5)*sstep+j);
            mat[1].val[2] = vld1_u8(src+(i+6)*sstep+j);
            mat[1].val[3] = vld1_u8(src+(i+7)*sstep+j);

            //step1 trn nearby registers
            temp8x8x2[0] = vtrn_u8(mat[0].val[0], mat[0].val[1]);
            temp8x8x2[1] = vtrn_u8(mat[0].val[2], mat[0].val[3]);
            temp8x8x2[2] = vtrn_u8(mat[1].val[0], mat[1].val[1]);
            temp8x8x2[3] = vtrn_u8(mat[1].val[2], mat[1].val[3]);

            //step2 trn 1,3 2,4 5,7 6,8
            temp16x4x2[0].val[0] = vreinterpret_u16_u8(temp8x8x2[0].val[0]);
            temp16x4x2[0].val[1] = vreinterpret_u16_u8(temp8x8x2[0].val[1]);
            temp16x4x2[1].val[0] = vreinterpret_u16_u8(temp8x8x2[1].val[0]);
            temp16x4x2[1].val[1] = vreinterpret_u16_u8(temp8x8x2[1].val[1]);
            temp16x4x2[2].val[0] = vreinterpret_u16_u8(temp8x8x2[2].val[0]);
            temp16x4x2[2].val[1] = vreinterpret_u16_u8(temp8x8x2[2].val[1]);
            temp16x4x2[3].val[0] = vreinterpret_u16_u8(temp8x8x2[3].val[0]);
            temp16x4x2[3].val[1] = vreinterpret_u16_u8(temp8x8x2[3].val[1]);

            temp16x4x2[4] = vtrn_u16(temp16x4x2[0].val[0],temp16x4x2[1].val[0]);
            temp16x4x2[5] = vtrn_u16(temp16x4x2[0].val[1],temp16x4x2[1].val[1]);
            temp16x4x2[6] = vtrn_u16(temp16x4x2[2].val[0],temp16x4x2[3].val[0]);
            temp16x4x2[7] = vtrn_u16(temp16x4x2[2].val[1],temp16x4x2[3].val[1]);

            //step3 trn 1,5 2,6 3,7 4,8
            temp32x2x2[0].val[0] = vreinterpret_u32_u16(temp16x4x2[4].val[0]);
            temp32x2x2[0].val[1] = vreinterpret_u32_u16(temp16x4x2[4].val[1]);
            temp32x2x2[1].val[0] = vreinterpret_u32_u16(temp16x4x2[5].val[0]);
            temp32x2x2[1].val[1] = vreinterpret_u32_u16(temp16x4x2[5].val[1]);
            temp32x2x2[2].val[0] = vreinterpret_u32_u16(temp16x4x2[6].val[0]);
            temp32x2x2[2].val[1] = vreinterpret_u32_u16(temp16x4x2[6].val[1]);
            temp32x2x2[3].val[0] = vreinterpret_u32_u16(temp16x4x2[7].val[0]);
            temp32x2x2[3].val[1] = vreinterpret_u32_u16(temp16x4x2[7].val[1]);

            temp32x2x2[4] = vtrn_u32(temp32x2x2[0].val[0],temp32x2x2[2].val[0]);
            temp32x2x2[5] = vtrn_u32(temp32x2x2[0].val[1],temp32x2x2[2].val[1]);
            temp32x2x2[6] = vtrn_u32(temp32x2x2[1].val[0],temp32x2x2[3].val[0]);
            temp32x2x2[7] = vtrn_u32(temp32x2x2[1].val[1],temp32x2x2[3].val[1]);

            //step4 store bytes in correct position,the order now is 1,2,3,4,5,6,7,8
            temp8x8x2[0].val[0] = vreinterpret_u8_u32(temp32x2x2[7].val[1]);
            temp8x8x2[0].val[1] = vreinterpret_u8_u32(temp32x2x2[5].val[1]);
            temp8x8x2[1].val[0] = vreinterpret_u8_u32(temp32x2x2[6].val[1]);
            temp8x8x2[1].val[1] = vreinterpret_u8_u32(temp32x2x2[4].val[1]);
            temp8x8x2[2].val[0] = vreinterpret_u8_u32(temp32x2x2[7].val[0]);
            temp8x8x2[2].val[1] = vreinterpret_u8_u32(temp32x2x2[5].val[0]);
            temp8x8x2[3].val[0] = vreinterpret_u8_u32(temp32x2x2[6].val[0]);
            temp8x8x2[3].val[1] = vreinterpret_u8_u32(temp32x2x2[4].val[0]);

            vst1_u8(dst+(w-j-8)*dstep+i, temp8x8x2[0].val[0]);
            vst1_u8(dst+(w-j-7)*dstep+i, temp8x8x2[0].val[1]);
            vst1_u8(dst+(w-j-6)*dstep+i, temp8x8x2[1].val[0]);
            vst1_u8(dst+(w-j-5)*dstep+i, temp8x8x2[1].val[1]);
            vst1_u8(dst+(w-j-4)*dstep+i, temp8x8x2[2].val[0]);
            vst1_u8(dst+(w-j-3)*dstep+i, temp8x8x2[2].val[1]);
            vst1_u8(dst+(w-j-2)*dstep+i, temp8x8x2[3].val[0]);
            vst1_u8(dst+(w-j-1)*dstep+i, temp8x8x2[3].val[1]);
        }
    }

    for(int y=sh;y<h;y++)
    {
        int yIndex = y*sstep;
        for(int x=0;x<w;x++)
        {
            dst[(w-x-1)*dstep+y]=src[yIndex+x];   // use dstep (not h) as the destination row stride
        }
    }

    for(int x=sw;x<w;x++)
    {
        int xIndex = (w-x-1)*dstep;
        for(int y=0;y<sh;y++)
        {
            dst[xIndex+y]=src[y*sstep+x];
        }
    }

    return;
}
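A hedged usage sketch for the left-rotation path (not from the original post); it assumes an 8-bit single-channel cv::Mat and OpenCV >= 3.2, and the wrapper name is illustrative.

#include <opencv2/opencv.hpp>

static cv::Mat rotate90_ccw_neon(const cv::Mat& gray)
{
    CV_Assert(gray.type() == CV_8UC1);
    cv::Mat dst(gray.cols, gray.rows, CV_8UC1);
    rotate_neg90_gray(gray.data, gray.step, dst.data, dst.step, gray.cols, gray.rows);
    return dst;
}

// Reference check:
//   cv::Mat ref;  cv::rotate(gray, ref, cv::ROTATE_90_COUNTERCLOCKWISE);
//   CV_Assert(cv::norm(ref, rotate90_ccw_neon(gray), cv::NORM_INF) == 0);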

 

Color (RGB, Three-Channel) Left Rotation by 90°

static void rotate_neg90_rgb(uint8_t* src, size_t sstep, uint8_t* dst, size_t dstep, int w,int h, int cn)
{
    uint8x8x4_t     mat0[2],mat1[2],mat2[2]; //use 2 register array to load a 8x8 patch
    uint8x8x2_t     temp8x8x2_0[4],temp8x8x2_1[4],temp8x8x2_2[4];
    uint16x4x2_t    temp16x4x2_0[8],temp16x4x2_1[8],temp16x4x2_2[8];
    uint32x2x2_t    temp32x2x2_0[8],temp32x2x2_1[8],temp32x2x2_2[8];

    int dw=w&7; int sw=w-dw;
    int dh=h&7; int sh=h-dh;
    
    for(int i=0;i<sh;i+=8)
    {
        for(int j=0;j<sw;j+=8)
        {
            //step0 load 8x8 bytes in 8 registers
            uint8x8x3_t v = vld3_u8(src+ i   *sstep+j*cn);
			mat0[0].val[0] = v.val[0]; mat1[0].val[0] = v.val[1]; mat2[0].val[0] = v.val[2];
			
			v = vld3_u8(src+(i+1)*sstep+j*cn);
            mat0[0].val[1] = v.val[0]; mat1[0].val[1] = v.val[1]; mat2[0].val[1] = v.val[2];

            v = vld3_u8(src+(i+2)*sstep+j*cn);
			mat0[0].val[2] = v.val[0]; mat1[0].val[2] = v.val[1]; mat2[0].val[2] = v.val[2];

            v = vld3_u8(src+(i+3)*sstep+j*cn);
			mat0[0].val[3] = v.val[0]; mat1[0].val[3] = v.val[1]; mat2[0].val[3] = v.val[2];
			
			v = vld3_u8(src+(i+4)*sstep+j*cn);
			mat0[1].val[0] = v.val[0]; mat1[1].val[0] = v.val[1]; mat2[1].val[0] = v.val[2];
			
			v = vld3_u8(src+(i+5)*sstep+j*cn);
            mat0[1].val[1] = v.val[0]; mat1[1].val[1] = v.val[1]; mat2[1].val[1] = v.val[2];

            v = vld3_u8(src+(i+6)*sstep+j*cn);
			mat0[1].val[2] = v.val[0]; mat1[1].val[2] = v.val[1]; mat2[1].val[2] = v.val[2];

            v = vld3_u8(src+(i+7)*sstep+j*cn);
			mat0[1].val[3] = v.val[0]; mat1[1].val[3] = v.val[1]; mat2[1].val[3] = v.val[2];


            //step1 trn nearby registers
            temp8x8x2_0[0] = vtrn_u8(mat0[0].val[0], mat0[0].val[1]); temp8x8x2_1[0] = vtrn_u8(mat1[0].val[0], mat1[0].val[1]); temp8x8x2_2[0] = vtrn_u8(mat2[0].val[0], mat2[0].val[1]);
            temp8x8x2_0[1] = vtrn_u8(mat0[0].val[2], mat0[0].val[3]); temp8x8x2_1[1] = vtrn_u8(mat1[0].val[2], mat1[0].val[3]); temp8x8x2_2[1] = vtrn_u8(mat2[0].val[2], mat2[0].val[3]);
            temp8x8x2_0[2] = vtrn_u8(mat0[1].val[0], mat0[1].val[1]); temp8x8x2_1[2] = vtrn_u8(mat1[1].val[0], mat1[1].val[1]); temp8x8x2_2[2] = vtrn_u8(mat2[1].val[0], mat2[1].val[1]);
            temp8x8x2_0[3] = vtrn_u8(mat0[1].val[2], mat0[1].val[3]); temp8x8x2_1[3] = vtrn_u8(mat1[1].val[2], mat1[1].val[3]); temp8x8x2_2[3] = vtrn_u8(mat2[1].val[2], mat2[1].val[3]);

            //step2 trn 1,3 2,4 5,7 6,8
            temp16x4x2_0[0].val[0] = vreinterpret_u16_u8(temp8x8x2_0[0].val[0]); temp16x4x2_1[0].val[0] = vreinterpret_u16_u8(temp8x8x2_1[0].val[0]); temp16x4x2_2[0].val[0] = vreinterpret_u16_u8(temp8x8x2_2[0].val[0]);
            temp16x4x2_0[0].val[1] = vreinterpret_u16_u8(temp8x8x2_0[0].val[1]); temp16x4x2_1[0].val[1] = vreinterpret_u16_u8(temp8x8x2_1[0].val[1]); temp16x4x2_2[0].val[1] = vreinterpret_u16_u8(temp8x8x2_2[0].val[1]);
            temp16x4x2_0[1].val[0] = vreinterpret_u16_u8(temp8x8x2_0[1].val[0]); temp16x4x2_1[1].val[0] = vreinterpret_u16_u8(temp8x8x2_1[1].val[0]); temp16x4x2_2[1].val[0] = vreinterpret_u16_u8(temp8x8x2_2[1].val[0]);
            temp16x4x2_0[1].val[1] = vreinterpret_u16_u8(temp8x8x2_0[1].val[1]); temp16x4x2_1[1].val[1] = vreinterpret_u16_u8(temp8x8x2_1[1].val[1]); temp16x4x2_2[1].val[1] = vreinterpret_u16_u8(temp8x8x2_2[1].val[1]);
            temp16x4x2_0[2].val[0] = vreinterpret_u16_u8(temp8x8x2_0[2].val[0]); temp16x4x2_1[2].val[0] = vreinterpret_u16_u8(temp8x8x2_1[2].val[0]); temp16x4x2_2[2].val[0] = vreinterpret_u16_u8(temp8x8x2_2[2].val[0]);
            temp16x4x2_0[2].val[1] = vreinterpret_u16_u8(temp8x8x2_0[2].val[1]); temp16x4x2_1[2].val[1] = vreinterpret_u16_u8(temp8x8x2_1[2].val[1]); temp16x4x2_2[2].val[1] = vreinterpret_u16_u8(temp8x8x2_2[2].val[1]);
            temp16x4x2_0[3].val[0] = vreinterpret_u16_u8(temp8x8x2_0[3].val[0]); temp16x4x2_1[3].val[0] = vreinterpret_u16_u8(temp8x8x2_1[3].val[0]); temp16x4x2_2[3].val[0] = vreinterpret_u16_u8(temp8x8x2_2[3].val[0]);
            temp16x4x2_0[3].val[1] = vreinterpret_u16_u8(temp8x8x2_0[3].val[1]); temp16x4x2_1[3].val[1] = vreinterpret_u16_u8(temp8x8x2_1[3].val[1]); temp16x4x2_2[3].val[1] = vreinterpret_u16_u8(temp8x8x2_2[3].val[1]);

            temp16x4x2_0[4] = vtrn_u16(temp16x4x2_0[0].val[0],temp16x4x2_0[1].val[0]); temp16x4x2_1[4] = vtrn_u16(temp16x4x2_1[0].val[0],temp16x4x2_1[1].val[0]); temp16x4x2_2[4] = vtrn_u16(temp16x4x2_2[0].val[0],temp16x4x2_2[1].val[0]);
            temp16x4x2_0[5] = vtrn_u16(temp16x4x2_0[0].val[1],temp16x4x2_0[1].val[1]); temp16x4x2_1[5] = vtrn_u16(temp16x4x2_1[0].val[1],temp16x4x2_1[1].val[1]); temp16x4x2_2[5] = vtrn_u16(temp16x4x2_2[0].val[1],temp16x4x2_2[1].val[1]);
            temp16x4x2_0[6] = vtrn_u16(temp16x4x2_0[2].val[0],temp16x4x2_0[3].val[0]); temp16x4x2_1[6] = vtrn_u16(temp16x4x2_1[2].val[0],temp16x4x2_1[3].val[0]); temp16x4x2_2[6] = vtrn_u16(temp16x4x2_2[2].val[0],temp16x4x2_2[3].val[0]);
            temp16x4x2_0[7] = vtrn_u16(temp16x4x2_0[2].val[1],temp16x4x2_0[3].val[1]); temp16x4x2_1[7] = vtrn_u16(temp16x4x2_1[2].val[1],temp16x4x2_1[3].val[1]); temp16x4x2_2[7] = vtrn_u16(temp16x4x2_2[2].val[1],temp16x4x2_2[3].val[1]);

            //step3 trn 1,5 2,6 3,7 4,8
            temp32x2x2_0[0].val[0] = vreinterpret_u32_u16(temp16x4x2_0[4].val[0]); temp32x2x2_1[0].val[0] = vreinterpret_u32_u16(temp16x4x2_1[4].val[0]); temp32x2x2_2[0].val[0] = vreinterpret_u32_u16(temp16x4x2_2[4].val[0]);
            temp32x2x2_0[0].val[1] = vreinterpret_u32_u16(temp16x4x2_0[4].val[1]); temp32x2x2_1[0].val[1] = vreinterpret_u32_u16(temp16x4x2_1[4].val[1]); temp32x2x2_2[0].val[1] = vreinterpret_u32_u16(temp16x4x2_2[4].val[1]);
            temp32x2x2_0[1].val[0] = vreinterpret_u32_u16(temp16x4x2_0[5].val[0]); temp32x2x2_1[1].val[0] = vreinterpret_u32_u16(temp16x4x2_1[5].val[0]); temp32x2x2_2[1].val[0] = vreinterpret_u32_u16(temp16x4x2_2[5].val[0]);
            temp32x2x2_0[1].val[1] = vreinterpret_u32_u16(temp16x4x2_0[5].val[1]); temp32x2x2_1[1].val[1] = vreinterpret_u32_u16(temp16x4x2_1[5].val[1]); temp32x2x2_2[1].val[1] = vreinterpret_u32_u16(temp16x4x2_2[5].val[1]);
            temp32x2x2_0[2].val[0] = vreinterpret_u32_u16(temp16x4x2_0[6].val[0]); temp32x2x2_1[2].val[0] = vreinterpret_u32_u16(temp16x4x2_1[6].val[0]); temp32x2x2_2[2].val[0] = vreinterpret_u32_u16(temp16x4x2_2[6].val[0]);
            temp32x2x2_0[2].val[1] = vreinterpret_u32_u16(temp16x4x2_0[6].val[1]); temp32x2x2_1[2].val[1] = vreinterpret_u32_u16(temp16x4x2_1[6].val[1]); temp32x2x2_2[2].val[1] = vreinterpret_u32_u16(temp16x4x2_2[6].val[1]);
            temp32x2x2_0[3].val[0] = vreinterpret_u32_u16(temp16x4x2_0[7].val[0]); temp32x2x2_1[3].val[0] = vreinterpret_u32_u16(temp16x4x2_1[7].val[0]); temp32x2x2_2[3].val[0] = vreinterpret_u32_u16(temp16x4x2_2[7].val[0]);
            temp32x2x2_0[3].val[1] = vreinterpret_u32_u16(temp16x4x2_0[7].val[1]); temp32x2x2_1[3].val[1] = vreinterpret_u32_u16(temp16x4x2_1[7].val[1]); temp32x2x2_2[3].val[1] = vreinterpret_u32_u16(temp16x4x2_2[7].val[1]);

            temp32x2x2_0[4] = vtrn_u32(temp32x2x2_0[0].val[0],temp32x2x2_0[2].val[0]); temp32x2x2_1[4] = vtrn_u32(temp32x2x2_1[0].val[0],temp32x2x2_1[2].val[0]); temp32x2x2_2[4] = vtrn_u32(temp32x2x2_2[0].val[0],temp32x2x2_2[2].val[0]);
            temp32x2x2_0[5] = vtrn_u32(temp32x2x2_0[0].val[1],temp32x2x2_0[2].val[1]); temp32x2x2_1[5] = vtrn_u32(temp32x2x2_1[0].val[1],temp32x2x2_1[2].val[1]); temp32x2x2_2[5] = vtrn_u32(temp32x2x2_2[0].val[1],temp32x2x2_2[2].val[1]);
            temp32x2x2_0[6] = vtrn_u32(temp32x2x2_0[1].val[0],temp32x2x2_0[3].val[0]); temp32x2x2_1[6] = vtrn_u32(temp32x2x2_1[1].val[0],temp32x2x2_1[3].val[0]); temp32x2x2_2[6] = vtrn_u32(temp32x2x2_2[1].val[0],temp32x2x2_2[3].val[0]);
            temp32x2x2_0[7] = vtrn_u32(temp32x2x2_0[1].val[1],temp32x2x2_0[3].val[1]); temp32x2x2_1[7] = vtrn_u32(temp32x2x2_1[1].val[1],temp32x2x2_1[3].val[1]); temp32x2x2_2[7] = vtrn_u32(temp32x2x2_2[1].val[1],temp32x2x2_2[3].val[1]);

            //step4 store bytes in correct position,the order now is 1,2,3,4,5,6,7,8
            temp8x8x2_0[0].val[0] = vreinterpret_u8_u32(temp32x2x2_0[7].val[1]); temp8x8x2_1[0].val[0] = vreinterpret_u8_u32(temp32x2x2_1[7].val[1]); temp8x8x2_2[0].val[0] = vreinterpret_u8_u32(temp32x2x2_2[7].val[1]);
            temp8x8x2_0[0].val[1] = vreinterpret_u8_u32(temp32x2x2_0[5].val[1]); temp8x8x2_1[0].val[1] = vreinterpret_u8_u32(temp32x2x2_1[5].val[1]); temp8x8x2_2[0].val[1] = vreinterpret_u8_u32(temp32x2x2_2[5].val[1]);
            temp8x8x2_0[1].val[0] = vreinterpret_u8_u32(temp32x2x2_0[6].val[1]); temp8x8x2_1[1].val[0] = vreinterpret_u8_u32(temp32x2x2_1[6].val[1]); temp8x8x2_2[1].val[0] = vreinterpret_u8_u32(temp32x2x2_2[6].val[1]);
            temp8x8x2_0[1].val[1] = vreinterpret_u8_u32(temp32x2x2_0[4].val[1]); temp8x8x2_1[1].val[1] = vreinterpret_u8_u32(temp32x2x2_1[4].val[1]); temp8x8x2_2[1].val[1] = vreinterpret_u8_u32(temp32x2x2_2[4].val[1]);
            temp8x8x2_0[2].val[0] = vreinterpret_u8_u32(temp32x2x2_0[7].val[0]); temp8x8x2_1[2].val[0] = vreinterpret_u8_u32(temp32x2x2_1[7].val[0]); temp8x8x2_2[2].val[0] = vreinterpret_u8_u32(temp32x2x2_2[7].val[0]);
            temp8x8x2_0[2].val[1] = vreinterpret_u8_u32(temp32x2x2_0[5].val[0]); temp8x8x2_1[2].val[1] = vreinterpret_u8_u32(temp32x2x2_1[5].val[0]); temp8x8x2_2[2].val[1] = vreinterpret_u8_u32(temp32x2x2_2[5].val[0]);
            temp8x8x2_0[3].val[0] = vreinterpret_u8_u32(temp32x2x2_0[6].val[0]); temp8x8x2_1[3].val[0] = vreinterpret_u8_u32(temp32x2x2_1[6].val[0]); temp8x8x2_2[3].val[0] = vreinterpret_u8_u32(temp32x2x2_2[6].val[0]);
            temp8x8x2_0[3].val[1] = vreinterpret_u8_u32(temp32x2x2_0[4].val[0]); temp8x8x2_1[3].val[1] = vreinterpret_u8_u32(temp32x2x2_1[4].val[0]); temp8x8x2_2[3].val[1] = vreinterpret_u8_u32(temp32x2x2_2[4].val[0]);

            v_store_interleave_rotate(dst+(w-j-8)*dstep+i*cn, temp8x8x2_0[0].val[0], temp8x8x2_1[0].val[0], temp8x8x2_2[0].val[0]);
            v_store_interleave_rotate(dst+(w-j-7)*dstep+i*cn, temp8x8x2_0[0].val[1], temp8x8x2_1[0].val[1], temp8x8x2_2[0].val[1]);
            v_store_interleave_rotate(dst+(w-j-6)*dstep+i*cn, temp8x8x2_0[1].val[0], temp8x8x2_1[1].val[0], temp8x8x2_2[1].val[0]);
            v_store_interleave_rotate(dst+(w-j-5)*dstep+i*cn, temp8x8x2_0[1].val[1], temp8x8x2_1[1].val[1], temp8x8x2_2[1].val[1]);
            v_store_interleave_rotate(dst+(w-j-4)*dstep+i*cn, temp8x8x2_0[2].val[0], temp8x8x2_1[2].val[0], temp8x8x2_2[2].val[0]);
            v_store_interleave_rotate(dst+(w-j-3)*dstep+i*cn, temp8x8x2_0[2].val[1], temp8x8x2_1[2].val[1], temp8x8x2_2[2].val[1]);
            v_store_interleave_rotate(dst+(w-j-2)*dstep+i*cn, temp8x8x2_0[3].val[0], temp8x8x2_1[3].val[0], temp8x8x2_2[3].val[0]);
            v_store_interleave_rotate(dst+(w-j-1)*dstep+i*cn, temp8x8x2_0[3].val[1], temp8x8x2_1[3].val[1], temp8x8x2_2[3].val[1]);
        }
    }

    for(int y=sh;y<h;y++)
    {
        int yIndex = y*sstep;
        for(int x=0;x<w;x++)
        {
            for(int n=0;n<cn;n++)
            {
                dst[(w-x-1)*dstep+cn*y+n]=src[yIndex+cn*x+n];   // use dstep (not h) as the destination row stride
            }
        }
    }

    for(int x=sw;x<w;x++)
    {
        int xIndex = (w-x-1)*dstep;
        for(int y=0;y<sh;y++)
        {
            for(int n=0;n<cn;n++)
            {
                dst[xIndex+cn*y+n]=src[y*sstep+cn*x+n];
            }
        }
    }

    return;
}

The v_store_interleave_rotate helper used by the two RGB functions above (declare it before them when compiling):

inline void v_store_interleave_rotate(unsigned char* ptr, const uint8x8_t& a, const uint8x8_t& b, const uint8x8_t& c)
{
    // Re-interleave the three 8-byte channel planes and store them as 24 packed RGB bytes.
    uint8x8x3_t v;
    v.val[0] = a;
    v.val[1] = b;
    v.val[2] = c;
    vst3_u8(ptr, v);
}

 

Timing Comparison on RK3288-linux (hard-float): OpenCV 4.1 vs. NEON
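The original post's timing table for this section is not reproduced here. Below is a minimal, hedged harness sketch (assuming OpenCV is available on the board) that could produce a comparable OpenCV-vs-NEON measurement; the 1080p single-channel test size and all names are illustrative.

#include <chrono>
#include <cstdio>
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src(1080, 1920, CV_8UC1);
    cv::randu(src, 0, 256);
    cv::Mat dst_cv, dst_neon(src.cols, src.rows, CV_8UC1);

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < 100; i++)
        cv::rotate(src, dst_cv, cv::ROTATE_90_CLOCKWISE);
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < 100; i++)
        rotate_pos90_gray(src.data, src.step, dst_neon.data, dst_neon.step,
                          src.cols, src.rows);
    auto t2 = std::chrono::steady_clock::now();

    double ms_cv   = std::chrono::duration<double, std::milli>(t1 - t0).count() / 100.0;
    double ms_neon = std::chrono::duration<double, std::milli>(t2 - t1).count() / 100.0;
    std::printf("opencv: %.2f ms/frame, neon: %.2f ms/frame\n", ms_cv, ms_neon);
    return 0;
}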
