## Requirement

Decode a file containing an encoded H.264 or H.265 video stream into raw video data, which can then be rendered to the screen or used for other purposes.

## How It Works

As we know, encoded data is only meant for transport and cannot be rendered directly. We therefore use Apple's native VideoToolbox framework to parse the encoded video stream in the file and decode the compressed data (H.264 / H.265) into raw video data in a specified format (YUV, RGB) for on-screen rendering.

Note: this article focuses on decoding. It relies on an FFmpeg build module, a video parsing module, and a rendering module; all of them are linked in the prerequisites below.

Prerequisites:
- Audio/video fundamentals
- Setting up the FFmpeg environment on iOS
- Parsing video data with FFmpeg
- Rendering video data with OpenGL
- H.264 / H.265 bitstream structure

Code: VideoDecoder (also available on Juejin, Jianshu, and my blog under the same name)

## 1. Overall Architecture

The overall idea: put the data parsed by FFmpeg into a CMBlockBuffer, put the VPS/SPS/PPS separated out of the extradata into a CMVideoFormatDescription, put the computed timestamp into a CMTime, and finally assemble a complete CMSampleBuffer with CMSampleBufferCreate to feed to the decoder.

### 1.1 Quick Flow

FFmpeg parse flow:
- Create a format context: `avformat_alloc_context`
- Open the file stream: `avformat_open_input`
- Find stream info: `avformat_find_stream_info`
- Get the audio/video stream index: `formatContext->streams[i]->codecpar->codec_type == (isVideoStream ? AVMEDIA_TYPE_VIDEO : AVMEDIA_TYPE_AUDIO)`
- Get the audio/video stream: `m_formatContext->streams[m_audioStreamIndex]`
- Read a frame of audio/video data: `av_read_frame`
- Get the extradata: `av_bitstream_filter_filter`

VideoToolbox decode flow:
- Compare against the previous extradata; if the data changed, the decoder must be recreated.
- Separate and save the key information (VPS, SPS, PPS) from the extradata parsed by FFmpeg, by comparing NALU headers.
- Load the VPS/SPS/PPS NALU header information via `CMVideoFormatDescriptionCreateFromH264ParameterSets` / `CMVideoFormatDescriptionCreateFromHEVCParameterSets`.
- Specify the decoder callback and the decoded output format (YUV, RGB, ...), then create the decoder with `VTDecompressionSessionCreate`.
- Wrap the compressed data in a `CMBlockBufferRef`, then convert it to a `CMSampleBufferRef` to hand to the decoder.
- Start decoding with `VTDecompressionSessionDecodeFrame`.
- In the callback, the `CVImageBufferRef` is the decoded data; it can be converted to a `CMSampleBufferRef` and passed out.

### 1.2 File Structure
(image: file structure)

### 1.3 Quick Start

Initialization. The decoded video data will be rendered onto this preview layer:

```objectivec
- (void)viewDidLoad {
    [super viewDidLoad];
    [self setupUI];
}

- (void)setupUI {
    self.previewView = [[XDXPreviewView alloc] initWithFrame:self.view.frame];
    [self.view addSubview:self.previewView];
    [self.view bringSubviewToFront:self.startBtn];
}
```

Parse and decode the video data in the file:

```objectivec
- (void)startDecodeByVTSessionWithIsH265Data:(BOOL)isH265 {
    NSString *path = [[NSBundle mainBundle] pathForResource:isH265 ? @"testh265" : @"testh264" ofType:@"MOV"];
    XDXAVParseHandler *parseHandler = [[XDXAVParseHandler alloc] initWithPath:path];
    XDXVideoDecoder *decoder = [[XDXVideoDecoder alloc] init];
    decoder.delegate = self;
    [parseHandler startParseWithCompletionHandler:^(BOOL isVideoFrame, BOOL isFinish, struct XDXParseVideoDataInfo *videoInfo, struct XDXParseAudioDataInfo *audioInfo) {
        if (isFinish) {
            [decoder stopDecoder];
            return;
        }
        if (isVideoFrame) {
            [decoder startDecodeVideoData:videoInfo];
        }
    }];
}
```

Render the decoded data to the screen.

Note: if the data contains B-frames, the frames must be reordered before rendering. This demo provides two files: an H.264 file without B-frames and an H.265 file with B-frames.

```objectivec
- (void)getVideoDecodeDataCallback:(CMSampleBufferRef)sampleBuffer {
    if (self.isH265File) {
        // Note: the first frame does not need to be sorted.
        if (self.isDecodeFirstFrame) {
            self.isDecodeFirstFrame = NO;
            CVPixelBufferRef pix = CMSampleBufferGetImageBuffer(sampleBuffer);
            [self.previewView displayPixelBuffer:pix];
        }
        XDXSortFrameHandler *sortHandler = [[XDXSortFrameHandler alloc] init];
        sortHandler.delegate = self;
        [sortHandler addDataToLinkList:sampleBuffer];
    } else {
        CVPixelBufferRef pix = CMSampleBufferGetImageBuffer(sampleBuffer);
        [self.previewView displayPixelBuffer:pix];
    }
}

- (void)getSortedVideoNode:(CMSampleBufferRef)sampleBuffer {
    int64_t pts = (int64_t)(CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * 1000);
    static int64_t lastpts = 0;
    NSLog(@"Test margin %lld", pts - lastpts);
    lastpts = pts;
    [self.previewView displayPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
}
```

## 2. Implementation

### 2.1 Detect whether the extradata needs updating from the parsed data
The data parsed by FFmpeg is carried in an XDXParseVideoDataInfo struct, defined below. The parse module is covered in the article linked above; this article only covers the decode module.

```objectivec
struct XDXParseVideoDataInfo {
    uint8_t              *data;
    int                  dataSize;
    uint8_t              *extraData;
    int                  extraDataSize;
    Float64              pts;
    Float64              time_base;
    int                  videoRotate;
    int                  fps;
    CMSampleTimingInfo   timingInfo;
    XDXVideoEncodeFormat videoFormat;
};
```

By caching the current extradata we can compare the newly obtained extradata with the previous one. If it changed, the decoder must be recreated; if not, the decoder can be reused. (This is especially useful for network streams, where the video stream may change.)

```objectivec
uint8_t *extraData = videoInfo->extraData;
int      size      = videoInfo->extraDataSize;

BOOL isNeedUpdate = [self isNeedUpdateExtraDataWithNewExtraData:extraData
                                                        newSize:size
                                                       lastData:&_lastExtraData
                                                       lastSize:&_lastExtraDataSize];
......

- (BOOL)isNeedUpdateExtraDataWithNewExtraData:(uint8_t *)newData
                                      newSize:(int)newSize
                                     lastData:(uint8_t **)lastData
                                     lastSize:(int *)lastSize {
    BOOL isNeedUpdate = NO;
    if (*lastSize == 0) {
        isNeedUpdate = YES;
    } else {
        if (*lastSize != newSize) {
            isNeedUpdate = YES;
        } else {
            if (memcmp(newData, *lastData, newSize) != 0) {
                isNeedUpdate = YES;
            }
        }
    }

    if (isNeedUpdate) {
        [self destoryDecoder];
        *lastData = (uint8_t *)malloc(newSize);
        memcpy(*lastData, newData, newSize);
        *lastSize = newSize;
    }
    return isNeedUpdate;
}
```

### 2.2 Separate the key information (H.265: VPS), SPS, PPS from the extradata

Creating the decoder requires key information from the NALU headers, such as the VPS, SPS, and PPS, which are used to build a CMVideoFormatDescription, the data structure describing the video, as shown in the figure above.

Note: an H.264 stream needs SPS and PPS; an H.265 stream needs VPS, SPS, and PPS.

- Locate the NALU headers: first find the start codes by checking whether four consecutive bytes equal 00 00 00 01. In H.264 data the start codes are followed by the SPS and PPS; in H.265 data, by the VPS, SPS, and PPS.
- Determine the NALU header lengths: the SPS length can be derived from the SPS index and the PPS index, and similarly for the others. Note that the bitstream uses a 4-byte start code as the delimiter, so the corresponding length must be subtracted.
- Separate the NALU header data: for H.264 data, ANDing the header byte with 0x1F yields the NALU type; for H.265 data, ANDing it with 0x4F does. This follows from the H.264/H.265 bitstream structure; if unclear, see the bitstream-structure article linked in the prerequisites at the top.
Once the data and size for each type are obtained, assign them to member variables for later use.

```objectivec
if (isNeedUpdate) {
    log4cplus_error(kModuleName, "%s: update extra data", __func__);
    [self getNALUInfoWithVideoFormat:videoInfo->videoFormat
                           extraData:extraData
                       extraDataSize:size
                         decoderInfo:&_decoderInfo];
}
......

- (void)getNALUInfoWithVideoFormat:(XDXVideoEncodeFormat)videoFormat
                         extraData:(uint8_t *)extraData
                     extraDataSize:(int)extraDataSize
                       decoderInfo:(XDXDecoderInfo *)decoderInfo {
    uint8_t *data = extraData;
    int      size = extraDataSize;

    int startCodeVPSIndex  = 0;
    int startCodeSPSIndex  = 0;
    int startCodeFPPSIndex = 0;
    int startCodeRPPSIndex = 0;
    int nalu_type = 0;

    for (int i = 0; i < size; i++) {
        if (i >= 3) {
            if (data[i] == 0x01 && data[i-1] == 0x00 && data[i-2] == 0x00 && data[i-3] == 0x00) {
                if (videoFormat == XDXH264EncodeFormat) {
                    if (startCodeSPSIndex == 0) {
                        startCodeSPSIndex = i;
                    }
                    if (i > startCodeSPSIndex) {
                        startCodeFPPSIndex = i;
                    }
                } else if (videoFormat == XDXH265EncodeFormat) {
                    if (startCodeVPSIndex == 0) {
                        startCodeVPSIndex = i;
                        continue;
                    }
                    if (i > startCodeVPSIndex && startCodeSPSIndex == 0) {
                        startCodeSPSIndex = i;
                        continue;
                    }
                    if (i > startCodeSPSIndex && startCodeFPPSIndex == 0) {
                        startCodeFPPSIndex = i;
                        continue;
                    }
                    if (i > startCodeFPPSIndex && startCodeRPPSIndex == 0) {
                        startCodeRPPSIndex = i;
                    }
                }
            }
        }
    }

    int spsSize = startCodeFPPSIndex - startCodeSPSIndex - 4;
    decoderInfo->sps_size = spsSize;

    if (videoFormat == XDXH264EncodeFormat) {
        int f_ppsSize = size - (startCodeFPPSIndex + 1);
        decoderInfo->f_pps_size = f_ppsSize;

        nalu_type = ((uint8_t)data[startCodeSPSIndex + 1] & 0x1F);
        if (nalu_type == 0x07) {
            uint8_t *sps = &data[startCodeSPSIndex + 1];
            [self copyDataWithOriginDataRef:&decoderInfo->sps newData:sps size:spsSize];
        }
        nalu_type = ((uint8_t)data[startCodeFPPSIndex + 1] & 0x1F);
        if (nalu_type == 0x08) {
            uint8_t *pps = &data[startCodeFPPSIndex + 1];
            [self copyDataWithOriginDataRef:&decoderInfo->f_pps newData:pps size:f_ppsSize];
        }
    } else {
        int vpsSize = startCodeSPSIndex - startCodeVPSIndex - 4;
        decoderInfo->vps_size = vpsSize;

        int f_ppsSize = startCodeRPPSIndex - startCodeFPPSIndex - 4;
        decoderInfo->f_pps_size = f_ppsSize;

        nalu_type = ((uint8_t)data[startCodeVPSIndex + 1] & 0x4F);
        if (nalu_type == 0x40) {
            uint8_t *vps = &data[startCodeVPSIndex + 1];
            [self copyDataWithOriginDataRef:&decoderInfo->vps newData:vps size:vpsSize];
        }
        nalu_type = ((uint8_t)data[startCodeSPSIndex + 1] & 0x4F);
        if (nalu_type == 0x42) {
            uint8_t *sps = &data[startCodeSPSIndex + 1];
            [self copyDataWithOriginDataRef:&decoderInfo->sps newData:sps size:spsSize];
        }
        nalu_type = ((uint8_t)data[startCodeFPPSIndex + 1] & 0x4F);
        if (nalu_type == 0x44) {
            uint8_t *pps = &data[startCodeFPPSIndex + 1];
            [self copyDataWithOriginDataRef:&decoderInfo->f_pps newData:pps size:f_ppsSize];
        }
        if (startCodeRPPSIndex == 0) {
            return;
        }
        int r_ppsSize = size - (startCodeRPPSIndex + 1);
        decoderInfo->r_pps_size = r_ppsSize;

        nalu_type = ((uint8_t)data[startCodeRPPSIndex + 1] & 0x4F);
        if (nalu_type == 0x44) {
            uint8_t *pps = &data[startCodeRPPSIndex + 1];
            [self copyDataWithOriginDataRef:&decoderInfo->r_pps newData:pps size:r_ppsSize];
        }
    }
}

- (void)copyDataWithOriginDataRef:(uint8_t **)originDataRef
                          newData:(uint8_t *)newData
                             size:(int)size {
    if (*originDataRef) {
        free(*originDataRef);
        *originDataRef = NULL;
    }
    *originDataRef = (uint8_t *)malloc(size);
    memcpy(*originDataRef, newData, size);
}
```

### 2.3 Create the decoder

Choose the H.264 or H.265 decoder according to the encoded data type. As the figure above shows, the data must be assembled into a CMSampleBuffer to hand to the decoder.

- Build a CMVideoFormatDescriptionRef from the (VPS,) SPS, PPS information. Note that some H.265 bitstreams carry two PPS, so the parameter-set count must be checked when assembling.
- Determine the output pixel format: specifying kCVPixelFormatType_420YpCbCr8BiPlanarFullRange sets the output to YUV 420SP; adapt as needed for other formats.
- Specify the callback and create the decoder.
With all the information above, call VTDecompressionSessionCreate to create the decoder session object.

```objectivec
// create decoder
if (!_decoderSession) {
    _decoderSession = [self createDecoderWithVideoInfo:videoInfo
                                          videoDescRef:&_decoderFormatDescription
                                           videoFormat:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
                                                  lock:_decoder_lock
                                              callback:VideoDecoderCallback
                                           decoderInfo:_decoderInfo];
}

- (VTDecompressionSessionRef)createDecoderWithVideoInfo:(XDXParseVideoDataInfo *)videoInfo
                                           videoDescRef:(CMVideoFormatDescriptionRef *)videoDescRef
                                            videoFormat:(OSType)videoFormat
                                                   lock:(pthread_mutex_t)lock
                                               callback:(VTDecompressionOutputCallback)callback
                                            decoderInfo:(XDXDecoderInfo)decoderInfo {
    pthread_mutex_lock(&lock);
    OSStatus status;
    if (videoInfo->videoFormat == XDXH264EncodeFormat) {
        const uint8_t *const parameterSetPointers[2] = {decoderInfo.sps, decoderInfo.f_pps};
        const size_t parameterSetSizes[2] = {static_cast<size_t>(decoderInfo.sps_size),
                                             static_cast<size_t>(decoderInfo.f_pps_size)};
        status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault,
                                                                     2,
                                                                     parameterSetPointers,
                                                                     parameterSetSizes,
                                                                     4,
                                                                     videoDescRef);
    } else if (videoInfo->videoFormat == XDXH265EncodeFormat) {
        if (decoderInfo.r_pps_size == 0) {
            const uint8_t *const parameterSetPointers[3] = {decoderInfo.vps, decoderInfo.sps, decoderInfo.f_pps};
            const size_t parameterSetSizes[3] = {static_cast<size_t>(decoderInfo.vps_size),
                                                 static_cast<size_t>(decoderInfo.sps_size),
                                                 static_cast<size_t>(decoderInfo.f_pps_size)};
            if (@available(iOS 11.0, *)) {
                status = CMVideoFormatDescriptionCreateFromHEVCParameterSets(kCFAllocatorDefault,
                                                                             3,
                                                                             parameterSetPointers,
                                                                             parameterSetSizes,
                                                                             4,
                                                                             NULL,
                                                                             videoDescRef);
            } else {
                status = -1;
                log4cplus_error(kModuleName, "%s: System version is too low!", __func__);
            }
        } else {
            const uint8_t *const parameterSetPointers[4] = {decoderInfo.vps, decoderInfo.sps, decoderInfo.f_pps, decoderInfo.r_pps};
            const size_t parameterSetSizes[4] = {static_cast<size_t>(decoderInfo.vps_size),
                                                 static_cast<size_t>(decoderInfo.sps_size),
                                                 static_cast<size_t>(decoderInfo.f_pps_size),
                                                 static_cast<size_t>(decoderInfo.r_pps_size)};
            if (@available(iOS 11.0, *)) {
                status = CMVideoFormatDescriptionCreateFromHEVCParameterSets(kCFAllocatorDefault,
                                                                             4,
                                                                             parameterSetPointers,
                                                                             parameterSetSizes,
                                                                             4,
                                                                             NULL,
                                                                             videoDescRef);
            } else {
                status = -1;
                log4cplus_error(kModuleName, "%s: System version is too low!", __func__);
            }
        }
    } else {
        status = -1;
    }

    if (status != noErr) {
        log4cplus_error(kModuleName, "%s: NALU header error!", __func__);
        pthread_mutex_unlock(&lock);
        [self destoryDecoder];
        return NULL;
    }

    uint32_t pixelFormatType = videoFormat;
    const void *keys[]   = {kCVPixelBufferPixelFormatTypeKey};
    const void *values[] = {CFNumberCreate(NULL, kCFNumberSInt32Type, &pixelFormatType)};
    CFDictionaryRef attrs = CFDictionaryCreate(NULL, keys, values, 1, NULL, NULL);

    VTDecompressionOutputCallbackRecord callBackRecord;
    callBackRecord.decompressionOutputCallback = callback;
    callBackRecord.decompressionOutputRefCon   = (__bridge void *)self;

    VTDecompressionSessionRef session;
    status = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                          *videoDescRef,
                                          NULL,
                                          attrs,
                                          &callBackRecord,
                                          &session);
    CFRelease(attrs);
    pthread_mutex_unlock(&lock);
    if (status != noErr) {
        log4cplus_error(kModuleName, "%s: Create decoder failed", __func__);
        [self destoryDecoder];
        return NULL;
    }
    return session;
}
```

### 2.4 Start decoding

Wrap the parsed raw data in an XDXDecodeVideoInfo struct so it can be extended later:

```objectivec
typedef struct {
    CVPixelBufferRef outputPixelbuffer;
    int              rotate;
    Float64          pts;
    int              fps;
    int              source_index;
} XDXDecodeVideoInfo;
```

- Wrap the encoded data in a CMBlockBufferRef.
- Create a CMSampleBufferRef from the CMBlockBufferRef.
- Decode the data.
A call to VTDecompressionSessionDecodeFrame decodes one frame of video. Its third parameter selects synchronous or asynchronous decoding.

```objectivec
// start decode
[self startDecode:videoInfo
          session:_decoderSession
             lock:_decoder_lock];
......

- (void)startDecode:(XDXParseVideoDataInfo *)videoInfo
            session:(VTDecompressionSessionRef)session
               lock:(pthread_mutex_t)lock {
    pthread_mutex_lock(&lock);
    uint8_t *data   = videoInfo->data;
    int      size   = videoInfo->dataSize;
    int      rotate = videoInfo->videoRotate;
    CMSampleTimingInfo timingInfo = videoInfo->timingInfo;

    uint8_t *tempData = (uint8_t *)malloc(size);
    memcpy(tempData, data, size);

    XDXDecodeVideoInfo *sourceRef = (XDXDecodeVideoInfo *)malloc(sizeof(XDXDecodeVideoInfo));
    sourceRef->outputPixelbuffer = NULL;
    sourceRef->rotate = rotate;
    sourceRef->pts    = videoInfo->pts;
    sourceRef->fps    = videoInfo->fps;

    CMBlockBufferRef blockBuffer;
    OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                         (void *)tempData,
                                                         size,
                                                         kCFAllocatorNull,
                                                         NULL,
                                                         0,
                                                         size,
                                                         0,
                                                         &blockBuffer);
    if (status == kCMBlockBufferNoErr) {
        CMSampleBufferRef sampleBuffer = NULL;
        const size_t sampleSizeArray[] = {static_cast<size_t>(size)};
        status = CMSampleBufferCreateReady(kCFAllocatorDefault,
                                           blockBuffer,
                                           _decoderFormatDescription,
                                           1,
                                           1,
                                           &timingInfo,
                                           1,
                                           sampleSizeArray,
                                           &sampleBuffer);
        if (status == kCMBlockBufferNoErr && sampleBuffer) {
            VTDecodeFrameFlags flags   = kVTDecodeFrame_EnableAsynchronousDecompression;
            VTDecodeInfoFlags  flagOut = 0;
            OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(session,
                                                                      sampleBuffer,
                                                                      flags,
                                                                      sourceRef,
                                                                      &flagOut);
            if (decodeStatus == kVTInvalidSessionErr) {
                pthread_mutex_unlock(&lock);
                [self destoryDecoder];
                if (blockBuffer) CFRelease(blockBuffer);
                free(tempData);
                tempData = NULL;
                CFRelease(sampleBuffer);
                return;
            }
            CFRelease(sampleBuffer);
        }
    }
    if (blockBuffer) {
        CFRelease(blockBuffer);
    }
    free(tempData);
    tempData = NULL;
    pthread_mutex_unlock(&lock);
}
```

### 2.5 The decoded data
The decoded data is delivered in the callback. There the decoded CVImageBufferRef is converted into a CMSampleBufferRef and passed out through the delegate.

```objectivec
#pragma mark - Callback
static void VideoDecoderCallback(void *decompressionOutputRefCon,
                                 void *sourceFrameRefCon,
                                 OSStatus status,
                                 VTDecodeInfoFlags infoFlags,
                                 CVImageBufferRef pixelBuffer,
                                 CMTime presentationTimeStamp,
                                 CMTime presentationDuration) {
    XDXDecodeVideoInfo *sourceRef = (XDXDecodeVideoInfo *)sourceFrameRefCon;
    if (pixelBuffer == NULL) {
        log4cplus_error(kModuleName, "%s: pixelbuffer is NULL status = %d", __func__, status);
        if (sourceRef) {
            free(sourceRef);
        }
        return;
    }

    XDXVideoDecoder *decoder = (__bridge XDXVideoDecoder *)decompressionOutputRefCon;
    CMSampleTimingInfo sampleTime = {
        .presentationTimeStamp = presentationTimeStamp,
        .decodeTimeStamp       = presentationTimeStamp
    };
    CMSampleBufferRef samplebuffer = [decoder createSampleBufferFromPixelbuffer:pixelBuffer
                                                                    videoRotate:sourceRef->rotate
                                                                     timingInfo:sampleTime];
    if (samplebuffer) {
        if ([decoder.delegate respondsToSelector:@selector(getVideoDecodeDataCallback:)]) {
            [decoder.delegate getVideoDecodeDataCallback:samplebuffer];
        }
        CFRelease(samplebuffer);
    }
    if (sourceRef) {
        free(sourceRef);
    }
}

- (CMSampleBufferRef)createSampleBufferFromPixelbuffer:(CVImageBufferRef)pixelBuffer
                                           videoRotate:(int)videoRotate
                                            timingInfo:(CMSampleTimingInfo)timingInfo {
    if (!pixelBuffer) {
        return NULL;
    }
    CVPixelBufferRef final_pixelbuffer = pixelBuffer;
    CMSampleBufferRef samplebuffer = NULL;
    CMVideoFormatDescriptionRef videoInfo = NULL;
    OSStatus status = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, final_pixelbuffer, &videoInfo);
    status = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                                final_pixelbuffer,
                                                true,
                                                NULL,
                                                NULL,
                                                videoInfo,
                                                &timingInfo,
                                                &samplebuffer);
    if (videoInfo != NULL) {
        CFRelease(videoInfo);
    }
    if (samplebuffer == NULL || status != noErr) {
        return NULL;
    }
    return samplebuffer;
}
```

### 2.6 Destroy the decoder

Remember to destroy the decoder after use so it can be created cleanly next time.

```objectivec
if (_decoderSession) {
    VTDecompressionSessionWaitForAsynchronousFrames(_decoderSession);
    VTDecompressionSessionInvalidate(_decoderSession);
    CFRelease(_decoderSession);
    _decoderSession = NULL;
}
if (_decoderFormatDescription) {
    CFRelease(_decoderFormatDescription);
    _decoderFormatDescription = NULL;
}
```

### 2.7 Addendum: reordering data containing B-frames
Note: if the video file or stream contains B-frames, the frames must be reordered before rendering. This article focuses on decoding; the sorting will be covered in a later article. It is already implemented in the code; download the demo if you want the details.

Original article: "Implementing hardware video decoding on iOS with VideoToolbox"