Let's look at the result first. The main video stream in the middle comes from a Dahua camera (H.264 video encoding) and a Hikvision camera (H.265 video encoding).

(screenshots of the page: the main video stream, with camera snapshots and alert data below)
Front-end code for accessing the video stream:

  <!-- Video stream -->
            <div id="col2">
                <div class="cell" style="flex: 7; background: none">
                    <div class="cell-box" style="position: relative">
                        <video autoplay muted id="video" class="video" />

                        <div class="cell div-faces">
                            <div class="cell-box">
                                <!-- Face recognition -->
                                <div class="faces-wrapper">
                                    <div v-for="i in 5" :key="i" class="face-wrapper">
                                        <div class="face-arrow"></div>
                                        <div
                                            class="face-image"
                                            :style="{
                                                background: faceImages[i - 1]
                                                    ? `url(data:image/jpeg;base64,${
                                                          faceImages[i - 1]
                                                      }) 0 0 / 100% 100% no-repeat`
                                                    : ''
                                            }"></div>
                                    </div>
                                </div>
                            </div>
                        </div>
                    </div>
                </div>
api.post('screen2/init').then((attach) => {
        const { streamerIp, streamerPort, cameraIp, cameraPort, cameraAdmin, cameraPsw } = attach
        webRtcServer = new WebRtcStreamer('video', `${location.protocol}//${streamerIp}:${streamerPort}`)
        webRtcServer.connect(`rtsp://${cameraAdmin}:${cameraPsw}@${cameraIp}:${cameraPort}`)
    })
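
The screen2/init call above simply hands the page the streamer and camera connection details it destructures. Below is a minimal sketch of what such an endpoint might look like; the class name, package and hard-coded values are assumptions (in the real project they come from ConfigService, and the response is wrapped by the project's R type):

package ahpu.aip.controller;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the screen2/init endpoint consumed by the front end above.
// In the real project the values come from configuration rather than being hard-coded.
@RestController
@RequestMapping("screen2/")
public class ScreenController {

    @PostMapping("init")
    public Map<String, Object> init() {
        Map<String, Object> attach = new HashMap<>();
        attach.put("streamerIp", "192.168.1.50");   // host running webrtc-streamer.exe (assumed)
        attach.put("streamerPort", 8000);           // webrtc-streamer HTTP port (commonly 8000)
        attach.put("cameraIp", "192.168.1.108");    // camera IP
        attach.put("cameraPort", 554);              // camera RTSP port
        attach.put("cameraAdmin", "admin");
        attach.put("cameraPsw", "admin123456");
        return attach;
    }
}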

For the backend deployment you also need to start webrtc-streamer.exe, which takes the camera's RTSP stream, handles decoding, and serves it to the browser over WebRTC; with it running, the web page can play the video stream.
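
webrtc-streamer.exe can be started manually or as a Windows service. If you would rather have the Spring Boot application launch it at startup, here is a rough sketch; the executable path is an assumption, and error handling and shutdown are omitted:

import java.io.IOException;

// Hypothetical helper that launches webrtc-streamer.exe alongside the application.
// Adjust the path to wherever the executable is deployed.
public class WebRtcStreamerLauncher {

    public static Process start() throws IOException {
        ProcessBuilder pb = new ProcessBuilder("C:\\webrtc-streamer\\webrtc-streamer.exe");
        pb.inheritIO();     // forward the streamer's console output to this process
        return pb.start();  // by default the streamer's HTTP server listens on port 8000
    }
}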

How are the camera snapshot images and alert data below the main video stream implemented?
1. Load the Dahua camera SDK into the project (SDK download).
(screenshot: the SDK jar files)
Add the dependencies to Maven's pom.xml so the jar files above are pulled into the project:

        <!-- External dependencies -->
        <dependency>
            <!-- groupId and artifactId can be chosen freely -->
            <groupId>com.dahua.netsdk</groupId>
            <artifactId>netsdk-api-main</artifactId>
            <!-- dependency scope must be system -->
            <scope>system</scope>
            <version>1.0-SNAPSHOT</version>
            <!-- location of the jar -->
            <systemPath>${project.basedir}/libs/netsdk-api-main-1.0.jar</systemPath>
        </dependency>
        <dependency>
            <!-- groupId and artifactId can be chosen freely -->
            <groupId>com.dahua.netsdk</groupId>
            <artifactId>netsdk-dynamic</artifactId>
            <!-- dependency scope must be system -->
            <scope>system</scope>
            <version>1.0-SNAPSHOT</version>
            <!-- location of the jar -->
            <systemPath>${project.basedir}/libs/netsdk-dynamic-lib-main-1.0.jar</systemPath>
        </dependency>
        <dependency>
            <!-- groupId and artifactId can be chosen freely -->
            <groupId>com.dahua.netsdk</groupId>
            <artifactId>netsdk-jna</artifactId>
            <!-- dependency scope must be system -->
            <scope>system</scope>
            <version>1.0-SNAPSHOT</version>
            <!-- location of the jar -->
            <systemPath>${project.basedir}/libs/jna.jar</systemPath>
        </dependency>

2. Then in StartRunner (its run method executes when the project starts), initialize the SDK, log in, subscribe to the face-detection event, and define the callback. Note that the callback AnalyzerDataCB must be a static singleton.

package ahpu.aip._runner;

import ahpu.aip.service.ConfigService;
import ahpu.aip.service.SceneService;
import ahpu.aip.util.HkUtil;
import ahpu.aip.util.RedisUtils;
import cn.hutool.core.codec.Base64;
import cn.hutool.core.io.FileUtil;
import com.alibaba.fastjson2.JSON;
import com.alibaba.fastjson2.JSONArray;
import com.netsdk.lib.NetSDKLib;
import com.netsdk.lib.ToolKits;
import com.sun.jna.Pointer;
import com.sun.jna.ptr.IntByReference;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import org.springframework.util.CollectionUtils;

import javax.annotation.Resource;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

@Component
public class StartRunner implements CommandLineRunner, DisposableBean {
    private final Logger log = LoggerFactory.getLogger(this.getClass());

    @Resource
    private ConfigService configService;
    @Resource
    private SceneService sceneService;



    @Override
    public void run(String... args) {
        // ============================== Initialize the Dahua camera ==============================
        // NetSDK library initialization
        boolean bInit = false;
        NetSDKLib netsdkApi = NetSDKLib.NETSDK_INSTANCE;


        // Device-disconnect callback: registered via CLIENT_Init; the SDK calls it when the device goes offline
        class DisConnect implements NetSDKLib.fDisConnect {
            public void invoke(NetSDKLib.LLong m_hLoginHandle, String pchDVRIP, int nDVRPort, Pointer dwUser) {
                System.out.printf("Device[%s] Port[%d] DisConnect!\n", pchDVRIP, nDVRPort);
            }
        }
        // Network-recovery callback: registered via CLIENT_SetAutoReconnect;
        // the SDK calls it when a previously disconnected device reconnects successfully
        class HaveReConnect implements NetSDKLib.fHaveReConnect {
            public void invoke(NetSDKLib.LLong m_hLoginHandle, String pchDVRIP, int nDVRPort, Pointer dwUser) {
                System.out.printf("ReConnect Device[%s] Port[%d]\n", pchDVRIP, nDVRPort);
            }
        }



        // Login parameters
        String m_strIp       = "192.168.1.108";
        int m_nPort          = 37777;
        String m_strUser     = "admin";
        String m_strPassword = "admin123456";
        // Device info
        NetSDKLib.NET_DEVICEINFO_Ex m_stDeviceInfo = new NetSDKLib.NET_DEVICEINFO_Ex(); // used with CLIENT_LoginEx2
        NetSDKLib.LLong m_hLoginHandle = new NetSDKLib.LLong(0);     // login handle
        NetSDKLib.LLong m_hAttachHandle = new NetSDKLib.LLong(0);    // intelligent-event subscription handle

        // Initialization
        bInit = netsdkApi.CLIENT_Init(new DisConnect(), null);

        // Enable automatic reconnection after a disconnect
        netsdkApi.CLIENT_SetAutoReconnect(new HaveReConnect(), null);

        if(!bInit) {
            System.out.println("Initialize SDK failed");
        }else{
            System.out.println("Initialize SDK Success");
        }

        // Log in
        int nSpecCap = NetSDKLib.EM_LOGIN_SPAC_CAP_TYPE.EM_LOGIN_SPEC_CAP_TCP; //=0
        IntByReference nError = new IntByReference(0);
        m_hLoginHandle = netsdkApi.CLIENT_LoginEx2(m_strIp, m_nPort, m_strUser, m_strPassword, nSpecCap, null, m_stDeviceInfo, nError);
        if(m_hLoginHandle.longValue() == 0) {
            System.err.printf("Login Device[%s] Port[%d]Failed.\n", m_strIp, m_nPort, ToolKits.getErrorCode());
        } else {
            System.out.println("Login Success [ " + m_strIp + " ]");
        }


        // Subscribe to intelligent events
        int bNeedPicture = 1; // whether event pictures are needed
        m_hAttachHandle =  netsdkApi.CLIENT_RealLoadPictureEx(m_hLoginHandle, 0, NetSDKLib.EVENT_IVS_ALL, bNeedPicture, AnalyzerDataCB.getInstance(), null, null);
        if(m_hAttachHandle.longValue() == 0) {
            System.err.println("CLIENT_RealLoadPictureEx Failed, Error:" + ToolKits.getErrorCode());
        }else {
            System.out.println("订阅成功~");
        }

    }


    private static class AnalyzerDataCB implements NetSDKLib.fAnalyzerDataCallBack {

        private static AnalyzerDataCB instance;
        public static AnalyzerDataCB getInstance() {
            if (instance == null) {
                synchronized (AnalyzerDataCB.class) {
                    if (instance == null) {
                        instance = new AnalyzerDataCB();
                    }
                }
            }
            return instance;
        }

        private static final Logger log = LoggerFactory.getLogger(AnalyzerDataCB.class);

        public static HashMap<String, Object> temMap;

        private int bGlobalScenePic;                              // whether a panoramic image exists; BOOL, 0 or 1
        private NetSDKLib.NET_PIC_INFO stuGlobalScenePicInfo;     // panoramic image info


        private NetSDKLib.NET_PIC_INFO stPicInfo;                 // face image
        private NetSDKLib.NET_FACE_DATA stuFaceData;              // face data

        private int nCandidateNumEx;                              // number of candidates matched for the current face
        private NetSDKLib.CANDIDATE_INFOEX[] stuCandidatesEx;     // extended info of the matched candidates

        // panoramic image, face image, comparison image
        private BufferedImage globalBufferedImage = null;
        private BufferedImage personBufferedImage = null;
        private BufferedImage candidateBufferedImage = null;
        String[] faceSexStr = {"Unknown", "Male", "Female"};
        // buffer of comparison images, used when displaying multiple pictures
        private ArrayList<BufferedImage> arrayListBuffer = new ArrayList<BufferedImage>();

        @Override
        public int invoke(NetSDKLib.LLong lAnalyzerHandle, int dwAlarmType,
                          Pointer pAlarmInfo, Pointer pBuffer, int dwBufSize,
                          Pointer dwUser, int nSequence, Pointer reserved) {
            // Get the event information
            getObjectInfo(dwAlarmType, pAlarmInfo);
            if (dwAlarmType == NetSDKLib.EVENT_IVS_FACEDETECT) {  // face detection
                // save the picture
                savePicture(pBuffer, dwBufSize, stPicInfo);
            }
            return 0;
        }

        /**
         * Get the event information
         * @param dwAlarmType event type
         * @param pAlarmInfo  pointer to the event info
         */
        public void getObjectInfo(int dwAlarmType, Pointer pAlarmInfo) {
            if(pAlarmInfo == null) {
                return;
            }

            switch(dwAlarmType)
            {
                case NetSDKLib.EVENT_IVS_FACEDETECT:   // face detection
                {
                    NetSDKLib.DEV_EVENT_FACEDETECT_INFO msg = new NetSDKLib.DEV_EVENT_FACEDETECT_INFO();
                    ToolKits.GetPointerData(pAlarmInfo, msg);
                    stPicInfo = msg.stuObject.stPicInfo;  // the detected face image
                    log.info("Mask status (0-unknown, 1-not recognized, 2-no mask, 3-wearing a mask): " + msg.emMask);
                    log.info("Time: " + msg.UTC);
                    RedisUtils.set("mask", msg.emMask == 3 ? "Mask on" : "No mask");
                    RedisUtils.set("time", msg.UTC + "");
                    break;
                }
                default:
                    break;
            }
        }

        /**
         * Save the face-detection snapshot
         * @param pBuffer   snapshot image buffer
         * @param dwBufSize snapshot buffer size
         * @param stPicInfo face image offset/length info
         */
        public void savePicture(Pointer pBuffer, int dwBufSize, NetSDKLib.NET_PIC_INFO stPicInfo) {
            File path = new File("./FaceDetected/");
            if (!path.exists()) {
                path.mkdir();
            }

            if (pBuffer == null || dwBufSize <= 0) {
                return;
            }

            // Save the face image
            if(stPicInfo != null) {
                byte[] bufferPerson = pBuffer.getByteArray(stPicInfo.dwOffSet, stPicInfo.dwFileLenth);
                ByteArrayInputStream byteArrInputPerson = new ByteArrayInputStream(bufferPerson);

                try {
                    personBufferedImage = ImageIO.read(byteArrInputPerson);
                    if(personBufferedImage == null) {
                        return;
                    }
                    // Convert the image to Base64 and store it in Redis
                    String base64 = Base64.encode(bufferPerson);
                    RedisUtils.set("img", "data:image/jpeg;base64," + base64);
                    String listStr = (String) RedisUtils.get("dahuaList");
                    List<HashMap> list = JSONArray.parseArray(listStr,HashMap.class);
                    HashMap<String,String> tmpResult = new HashMap<String,String>();
                    tmpResult.put("img",(String) RedisUtils.get("img"));
                    tmpResult.put("time",(String) RedisUtils.get("time"));
                    tmpResult.put("mask",(String) RedisUtils.get("mask"));

                    if (CollectionUtils.isEmpty(list)) {
                        list = new ArrayList<>();
                    }
                    list.add(tmpResult);

                    if(list.size()>5){
                        RedisUtils.set("dahuaList", JSON.toJSONString(list.subList(list.size()-5,list.size())));
                    }else {
                        RedisUtils.set("dahuaList",JSON.toJSONString(list));
                    }
                } catch (IOException e2) {
                    e2.printStackTrace();
                }
            }
        }

    }


    @Override
    public void destroy() {
        // SDK cleanup (CLIENT_StopLoadPic, CLIENT_Logout, CLIENT_Cleanup) would go here
        // once the login/attach handles are kept as fields; it is omitted in this demo.
    }
}
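
RedisUtils above is a small project-specific helper with static get/set methods. A minimal sketch of what it could look like, assuming Spring Data Redis is on the classpath (the real implementation may differ):

package ahpu.aip.util;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

// Hypothetical minimal version of the RedisUtils helper used in StartRunner and DahuaController.
// It exposes the injected StringRedisTemplate through static methods; only get/set are sketched
// because those are the only methods used above.
@Component
public class RedisUtils {

    private static StringRedisTemplate template;

    public RedisUtils(StringRedisTemplate stringRedisTemplate) {
        RedisUtils.template = stringRedisTemplate;
    }

    public static void set(String key, String value) {
        template.opsForValue().set(key, value);
    }

    public static Object get(String key) {
        return template.opsForValue().get(key);
    }
}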

The controller class that exposes the results to the front end:

@RestController
@Validated
@RequestMapping("dahua/")
public class DahuaController {

    @ApiOperation(value = "Dahua face list", tags = "Dahua face list")
    @GetMapping("getFaceList")
    public R face() {
        String dahuaList = (String) RedisUtils.get("dahuaList");
        List<HashMap> list = JSONArray.parseArray(dahuaList,HashMap.class);
        return R.succ().attach(list);
    }

}
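
R is likewise a project-specific response wrapper. Purely for context, a hypothetical minimal version consistent with how it is used above (the real class may carry more fields such as a message or timestamp):

// Hypothetical minimal version of the R response wrapper used by the controllers.
public class R {

    private int code;
    private Object attach;

    public static R succ() {
        R r = new R();
        r.code = 200;
        return r;
    }

    public R attach(Object attach) {
        this.attach = attach;
        return this;
    }

    public int getCode() {
        return code;
    }

    public Object getAttach() {
        return attach;
    }
}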
