I had a lot of trouble figuring out how to use Apple's hardware-accelerated video framework to decompress an H.264 video stream. After a few weeks I figured it out and wanted to share an extensive example, since I couldn't find one. My goal is to give a thorough, instructive example of Video Toolbox (made available as public API in iOS 8). My code will not compile or run as-is, since it needs to be integrated with an elementary H.264 stream (like a video read from a file or streamed from the network) and needs to be tweaked for the specific case. I should mention that I have very little experience with video encoding/decoding beyond what I learned while googling the subject. I don't know all the details about video formats, parameter structure, etc., so I've only included what I think you need to know. I am using Xcode 6.2 and have deployed to iOS devices running iOS 8.1 and 8.2.
3 Answers
Accepted answer:

Concepts:

NALUs: NALUs are simply chunks of data of varying length, each preceded by a NALU start code header (the 0x00 00 00 01 byte sequence used throughout the code below); the low 5 bits of the byte that follows the start code tell you the NALU type. (A minimal start-code scanning sketch appears after the procedure list below.)

Parameters: Your decoder needs parameters so it knows how the H.264 video data is stored. The two you need to set are the Sequence Parameter Set (SPS) and the Picture Parameter Set (PPS), and they each have their own NALU type number. You don't need to know what the parameters mean; the decoder knows what to do with them.

H.264 Stream Format: In most H.264 streams, you will receive an initial set of SPS and PPS parameters followed by an i-frame (aka IDR frame or flush frame) NALU. Then you will receive several P-frame NALUs (maybe a few dozen or so), then another set of parameters (which may be the same as the initial parameters) and an i-frame, more P-frames, etc. i-frames are much bigger than P-frames. Conceptually you can think of the i-frame as an entire image of the video, and the P-frames as just the changes that have been made to that i-frame, until you receive the next i-frame.

Procedure (as carried out by the code example below):

1. Generate individual NALUs from your H.264 elementary stream. This step depends entirely on your video source, so it is not shown here.
2. Create a CMVideoFormatDescriptionRef from your SPS and PPS NALUs with CMVideoFormatDescriptionCreateFromH264ParameterSets().
3. Repackage the IDR and non-IDR frame NALUs in AVCC format by replacing the 4-byte start code with a 4-byte big-endian length.
4. Wrap the frame data in a CMBlockBuffer and then a CMSampleBuffer.
5. Hand the CMSampleBuffer either to a VTDecompressionSession via VTDecompressionSessionDecodeFrame(), or to an AVSampleBufferDisplayLayer via enqueueSampleBuffer:.
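Since the code in this answer assumes it already knows where each NALU begins, here is a minimal, purely illustrative sketch (not part of the original answer) of how you might scan an Annex B buffer for 4-byte 0x00 00 00 01 start codes and log each NALU's type with the same (byte & 0x1F) mask the code below uses; the helper name logNALUTypes is made up for this example.

#import <Foundation/Foundation.h>

// scan an Annex B buffer for 4-byte start codes and log the type of each NALU found
static void logNALUTypes(const uint8_t *buf, size_t length)
{
    for (size_t i = 0; i + 4 < length; i++)
    {
        if (buf[i] == 0x00 && buf[i+1] == 0x00 && buf[i+2] == 0x00 && buf[i+3] == 0x01)
        {
            int naluType = buf[i + 4] & 0x1F;   // low 5 bits of the byte after the start code
            NSLog(@"NALU at offset %zu, type %d", i, naluType);
            i += 3;                             // jump past this start code before continuing
        }
    }
}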
Other notes:
Code Example:

So let's start by declaring some global variables and including the VT framework (VT = Video Toolbox). The declarations below cover the instance variables the rest of the example relies on.

#import <VideoToolbox/VideoToolbox.h>
#import <AVFoundation/AVFoundation.h>

@property (nonatomic, assign) CMVideoFormatDescriptionRef formatDesc;
@property (nonatomic, assign) VTDecompressionSessionRef decompressionSession;
@property (nonatomic, retain) AVSampleBufferDisplayLayer *videoLayer;
@property (nonatomic, assign) int spsSize;
@property (nonatomic, assign) int ppsSize;

The following array is only used so that you can print out what type of NALU frame you are receiving. If you know what all these types mean, good for you, you know more about H.264 than me :) My code only handles types 1, 5, 7 and 8.

NSString * const naluTypesStrings[] =
{
    @"0: Unspecified (non-VCL)",
    @"1: Coded slice of a non-IDR picture (VCL)",    // P frame
    @"2: Coded slice data partition A (VCL)",
    @"3: Coded slice data partition B (VCL)",
    @"4: Coded slice data partition C (VCL)",
    @"5: Coded slice of an IDR picture (VCL)",       // I frame
    @"6: Supplemental enhancement information (SEI) (non-VCL)",
    @"7: Sequence parameter set (non-VCL)",          // SPS parameter
    @"8: Picture parameter set (non-VCL)",           // PPS parameter
    @"9: Access unit delimiter (non-VCL)",
    @"10: End of sequence (non-VCL)",
    @"11: End of stream (non-VCL)",
    @"12: Filler data (non-VCL)",
    @"13: Sequence parameter set extension (non-VCL)",
    @"14: Prefix NAL unit (non-VCL)",
    @"15: Subset sequence parameter set (non-VCL)",
    @"16: Reserved (non-VCL)",
    @"17: Reserved (non-VCL)",
    @"18: Reserved (non-VCL)",
    @"19: Coded slice of an auxiliary coded picture without partitioning (non-VCL)",
    @"20: Coded slice extension (non-VCL)",
    @"21: Coded slice extension for depth view components (non-VCL)",
    @"22: Reserved (non-VCL)",
    @"23: Reserved (non-VCL)",
    @"24: STAP-A Single-time aggregation packet (non-VCL)",
    @"25: STAP-B Single-time aggregation packet (non-VCL)",
    @"26: MTAP16 Multi-time aggregation packet (non-VCL)",
    @"27: MTAP24 Multi-time aggregation packet (non-VCL)",
    @"28: FU-A Fragmentation unit (non-VCL)",
    @"29: FU-B Fragmentation unit (non-VCL)",
    @"30: Unspecified (non-VCL)",
    @"31: Unspecified (non-VCL)",
};

Now this is where all the magic happens.

-(void) receivedRawVideoFrame:(uint8_t *)frame withSize:(uint32_t)frameSize isIFrame:(int)isIFrame
{
    OSStatus status = noErr;

    uint8_t *data = NULL;
    uint8_t *pps = NULL;
    uint8_t *sps = NULL;

    // I know what my H.264 data source's NALUs look like so I know start code index is always 0.
    // if you don't know where it starts, you can use a for loop similar to how I find the 2nd and 3rd start codes
    int startCodeIndex = 0;
    int secondStartCodeIndex = 0;
    int thirdStartCodeIndex = 0;

    long blockLength = 0;

    CMSampleBufferRef sampleBuffer = NULL;
    CMBlockBufferRef blockBuffer = NULL;

    int nalu_type = (frame[startCodeIndex + 4] & 0x1F);
    NSLog(@"~~~~~~~ Received NALU Type \"%@\" ~~~~~~~~", naluTypesStrings[nalu_type]);

    // if we haven't already set up our format description with our SPS PPS parameters,
    // we can't process any frames except type 7 that has our parameters
    if (nalu_type != 7 && _formatDesc == NULL)
    {
        NSLog(@"Video error: Frame is not an I Frame and format description is null");
        return;
    }

    // NALU type 7 is the SPS parameter NALU
    if (nalu_type == 7)
    {
        // find where the second PPS start code begins (the 0x00 00 00 01 code)
        // from which we also get the length of the first SPS code
        for (int i = startCodeIndex + 4; i < startCodeIndex + 40; i++)
        {
            if (frame[i] == 0x00 && frame[i+1] == 0x00 && frame[i+2] == 0x00 && frame[i+3] == 0x01)
            {
                secondStartCodeIndex = i;
                _spsSize = secondStartCodeIndex;   // includes the header in the size
                break;
            }
        }

        // find what the second NALU type is
        nalu_type = (frame[secondStartCodeIndex + 4] & 0x1F);
        NSLog(@"~~~~~~~ Received NALU Type \"%@\" ~~~~~~~~", naluTypesStrings[nalu_type]);
    }

    // type 8 is the PPS parameter NALU
    if (nalu_type == 8)
    {
        // find where the NALU after this one starts so we know how long the PPS parameter is
        for (int i = _spsSize + 4; i < _spsSize + 30; i++)
        {
            if (frame[i] == 0x00 && frame[i+1] == 0x00 && frame[i+2] == 0x00 && frame[i+3] == 0x01)
            {
                thirdStartCodeIndex = i;
                _ppsSize = thirdStartCodeIndex - _spsSize;
                break;
            }
        }

        // allocate enough data to fit the SPS and PPS parameters into our data objects.
        // VTD doesn't want you to include the start code header (4 bytes long) so we add the - 4 here
        sps = malloc(_spsSize - 4);
        pps = malloc(_ppsSize - 4);

        // copy in the actual sps and pps values, again ignoring the 4 byte header
        memcpy(sps, &frame[4], _spsSize - 4);
        memcpy(pps, &frame[_spsSize + 4], _ppsSize - 4);

        // now we set our H264 parameters
        uint8_t *parameterSetPointers[2] = {sps, pps};
        size_t parameterSetSizes[2] = {_spsSize - 4, _ppsSize - 4};

        status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2,
                                                                     (const uint8_t *const *)parameterSetPointers,
                                                                     parameterSetSizes, 4,
                                                                     &_formatDesc);

        NSLog(@"\t\t Creation of CMVideoFormatDescription: %@", (status == noErr) ? @"successful!" : @"failed...");
        if (status != noErr) NSLog(@"\t\t Format Description ERROR type: %d", (int)status);

        // See if the decompression session can convert from the previous format description
        // to the new one; if not, we need to remake the decompression session.
        // This snippet was not necessary for my applications but it could be for yours
        /*BOOL needNewDecompSession = (VTDecompressionSessionCanAcceptFormatDescription(_decompressionSession, _formatDesc) == NO);
        if (needNewDecompSession)
        {
            [self createDecompSession];
        }*/

        // now lets handle the IDR frame that (should) come after the parameter sets
        // I say "should" because that's how I expect my H264 stream to work, YMMV
        nalu_type = (frame[thirdStartCodeIndex + 4] & 0x1F);
        NSLog(@"~~~~~~~ Received NALU Type \"%@\" ~~~~~~~~", naluTypesStrings[nalu_type]);
    }

    // create our VTDecompressionSession. This isn't necessary if you choose to use AVSampleBufferDisplayLayer
    if ((status == noErr) && (_decompressionSession == NULL))
    {
        [self createDecompSession];
    }

    // type 5 is an IDR frame NALU.
    // The SPS and PPS NALUs should always be followed by an IDR (or IFrame) NALU, as far as I know
    if (nalu_type == 5)
    {
        // find the offset, or where the SPS and PPS NALUs end and the IDR frame NALU begins
        int offset = _spsSize + _ppsSize;
        blockLength = frameSize - offset;
        data = malloc(blockLength);
        memcpy(data, &frame[offset], blockLength);

        // replace the start code header on this NALU with its size.
        // AVCC format requires that you do this.
        // htonl converts the unsigned int from host to network byte order
        uint32_t dataLength32 = htonl(blockLength - 4);
        memcpy(data, &dataLength32, sizeof(uint32_t));

        // create a block buffer from the IDR NALU
        status = CMBlockBufferCreateWithMemoryBlock(NULL,
                                                    data,             // memoryBlock to hold buffered data
                                                    blockLength,      // block length of the mem block in bytes
                                                    kCFAllocatorNull,
                                                    NULL,
                                                    0,                // offsetToData
                                                    blockLength,      // dataLength of relevant bytes, starting at offsetToData
                                                    0,
                                                    &blockBuffer);

        NSLog(@"\t\t BlockBufferCreation: \t %@", (status == kCMBlockBufferNoErr) ? @"successful!" : @"failed...");
    }

    // NALU type 1 is non-IDR (or PFrame) picture
    if (nalu_type == 1)
    {
        // non-IDR frames do not have an offset due to SPS and PPS, so the approach
        // is similar to the IDR frames just without the offset
        blockLength = frameSize;
        data = malloc(blockLength);
        memcpy(data, &frame[0], blockLength);

        // again, replace the start header with the size of the NALU
        uint32_t dataLength32 = htonl(blockLength - 4);
        memcpy(data, &dataLength32, sizeof(uint32_t));

        status = CMBlockBufferCreateWithMemoryBlock(NULL,
                                                    data,             // memoryBlock to hold data. If NULL, block will be alloc'ed when needed
                                                    blockLength,      // overall length of the mem block in bytes
                                                    kCFAllocatorNull,
                                                    NULL,
                                                    0,                // offsetToData
                                                    blockLength,      // dataLength of relevant data bytes, starting at offsetToData
                                                    0,
                                                    &blockBuffer);

        NSLog(@"\t\t BlockBufferCreation: \t %@", (status == kCMBlockBufferNoErr) ? @"successful!" : @"failed...");
    }

    // now create our sample buffer from the block buffer
    if (status == noErr)
    {
        // here I'm not bothering with any timing specifics since in my case we displayed all frames immediately
        const size_t sampleSize = blockLength;
        status = CMSampleBufferCreate(kCFAllocatorDefault,
                                      blockBuffer, true, NULL, NULL,
                                      _formatDesc, 1, 0, NULL, 1,
                                      &sampleSize, &sampleBuffer);

        NSLog(@"\t\t SampleBufferCreate: \t %@", (status == noErr) ? @"successful!" : @"failed...");
    }

    if (status == noErr)
    {
        // set some values of the sample buffer's attachments
        CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
        CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
        CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);

        // either send the samplebuffer to a VTDecompressionSession or to an AVSampleBufferDisplayLayer
        [self render:sampleBuffer];
    }

    // free memory to avoid a memory leak; do the same for sps, pps and the block buffer
    if (NULL != data)
    {
        free(data);
        data = NULL;
    }
    if (NULL != sps)
    {
        free(sps);
        sps = NULL;
    }
    if (NULL != pps)
    {
        free(pps);
        pps = NULL;
    }
    if (NULL != blockBuffer)
    {
        CFRelease(blockBuffer);
        blockBuffer = NULL;
    }
}

The following method creates your VTD session. Recreate it whenever you receive new parameters. (You don't have to recreate it every single time you receive parameters, though.)
If you want to set attributes for the destination pixel buffer, fill in the destinationImageBufferAttributes dictionary below and pass it to VTDecompressionSessionCreate (uncommenting the corresponding argument).

-(void) createDecompSession
{
    // make sure to destroy the old VTD session first
    if (_decompressionSession != NULL)
    {
        VTDecompressionSessionInvalidate(_decompressionSession);
        CFRelease(_decompressionSession);
    }
    _decompressionSession = NULL;

    VTDecompressionOutputCallbackRecord callBackRecord;
    callBackRecord.decompressionOutputCallback = decompressionSessionDecodeFrameCallback;

    // this is necessary if you need to make calls to Objective-C "self" from within the callback method
    callBackRecord.decompressionOutputRefCon = (__bridge void *)self;

    // you can set some desired attributes for the destination pixel buffer. I didn't use this but you may.
    // if you need to set some attributes, be sure to uncomment the dictionary in VTDecompressionSessionCreate
    NSDictionary *destinationImageBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                                      [NSNumber numberWithBool:YES],
                                                      (id)kCVPixelBufferOpenGLESCompatibilityKey,
                                                      nil];

    OSStatus status = VTDecompressionSessionCreate(NULL, _formatDesc, NULL,
                                                   NULL, // (__bridge CFDictionaryRef)(destinationImageBufferAttributes)
                                                   &callBackRecord, &_decompressionSession);
    NSLog(@"Video Decompression Session Create: \t %@", (status == noErr) ? @"successful!" : @"failed...");
    if (status != noErr) NSLog(@"\t\t VTD ERROR type: %d", (int)status);
}

Now this method gets called every time the VTD is done decompressing any frame you sent to it. This method gets called even if there's an error or if the frame is dropped.

void decompressionSessionDecodeFrameCallback(void *decompressionOutputRefCon,
                                             void *sourceFrameRefCon,
                                             OSStatus status,
                                             VTDecodeInfoFlags infoFlags,
                                             CVImageBufferRef imageBuffer,
                                             CMTime presentationTimeStamp,
                                             CMTime presentationDuration)
{
    THISCLASSNAME *streamManager = (__bridge THISCLASSNAME *)decompressionOutputRefCon;

    if (status != noErr)
    {
        NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
        NSLog(@"Decompression error: %@", error);
    }
    else
    {
        NSLog(@"Decompressed successfully");

        // do something with your resulting CVImageBufferRef that is your decompressed frame
        [streamManager displayDecodedFrame:imageBuffer];
    }
}

This is where we actually send the sampleBuffer off to the VTD to be decoded.

- (void) render:(CMSampleBufferRef)sampleBuffer
{
    VTDecodeFrameFlags flags = kVTDecodeFrame_EnableAsynchronousDecompression;
    VTDecodeInfoFlags flagOut;
    NSDate *currentTime = [NSDate date];
    VTDecompressionSessionDecodeFrame(_decompressionSession, sampleBuffer, flags,
                                      (void *)CFBridgingRetain(currentTime), &flagOut);

    CFRelease(sampleBuffer);

    // if you're using AVSampleBufferDisplayLayer, you only need to use this line of code
    // [videoLayer enqueueSampleBuffer:sampleBuffer];
}

If you're using AVSampleBufferDisplayLayer, initialize the layer like this, in viewDidLoad or inside some other init method.

-(void)viewDidLoad
{
    // create our AVSampleBufferDisplayLayer and add it to the view
    videoLayer = [[AVSampleBufferDisplayLayer alloc] init];
    videoLayer.frame = self.view.frame;
    videoLayer.bounds = self.view.bounds;
    videoLayer.videoGravity = AVLayerVideoGravityResizeAspect;

    // set Timebase, you may need this if you need to display frames at specific times
    // I didn't need it so I haven't verified that the timebase is working
    CMTimebaseRef controlTimebase;
    CMTimebaseCreateWithMasterClock(CFAllocatorGetDefault(), CMClockGetHostTimeClock(), &controlTimebase);

    //videoLayer.controlTimebase = controlTimebase;
    CMTimebaseSetTime(self.videoLayer.controlTimebase, kCMTimeZero);
    CMTimebaseSetRate(self.videoLayer.controlTimebase, 1.0);

    [[self.view layer] addSublayer:videoLayer];
}
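The question notes that this code has to be hooked up to an elementary H.264 stream, which is outside the scope of the answer. As a purely illustrative sketch (not part of the original answer), here is one way you might split an in-memory Annex B stream into chunks and hand them to receivedRawVideoFrame:. It assumes 4-byte 0x00 00 00 01 start codes, and it keeps each SPS+PPS+IDR run together in a single buffer because receivedRawVideoFrame: expects to locate the second and third start codes itself. The file name "stream.h264" and the method name feedElementaryStream are placeholders.

// Hypothetical driver: split an Annex B stream into chunks and feed the decoder above.
- (void)feedElementaryStream
{
    NSData *stream = [NSData dataWithContentsOfFile:@"stream.h264"];
    if (stream.length < 5) return;

    const uint8_t *bytes = stream.bytes;
    NSUInteger length = stream.length;
    NSUInteger chunkStart = 0;

    for (NSUInteger i = 4; i <= length; i++)
    {
        BOOL atEnd = (i == length);
        BOOL isStartCode = !atEnd && (i + 4 < length) &&
                           bytes[i] == 0x00 && bytes[i+1] == 0x00 &&
                           bytes[i+2] == 0x00 && bytes[i+3] == 0x01;
        if (!isStartCode && !atEnd) continue;

        int nextType = isStartCode ? (bytes[i + 4] & 0x1F) : 0;

        // only cut the stream in front of an SPS (7) or a non-IDR slice (1);
        // PPS (8) and IDR (5) NALUs stay glued to the SPS that precedes them
        if (atEnd || nextType == 7 || nextType == 1)
        {
            int firstType = bytes[chunkStart + 4] & 0x1F;
            [self receivedRawVideoFrame:(uint8_t *)(bytes + chunkStart)
                               withSize:(uint32_t)(i - chunkStart)
                               isIFrame:(firstType == 7 || firstType == 5)];
            chunkStart = i;
        }
    }
}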
If you can't find the VTD error codes in the framework, I've included them here. (All of these errors, and more, can be found in the framework's headers via the project navigator.) You will get one of these error codes either in the VTD decode frame callback or when you create your VTD session, if you did something incorrectly.

kVTPropertyNotSupportedErr = -12900,
kVTPropertyReadOnlyErr = -12901,
kVTParameterErr = -12902,
kVTInvalidSessionErr = -12903,
kVTAllocationFailedErr = -12904,
kVTPixelTransferNotSupportedErr = -12905, // c.f. -8961
kVTCouldNotFindVideoDecoderErr = -12906,
kVTCouldNotCreateInstanceErr = -12907,
kVTCouldNotFindVideoEncoderErr = -12908,
kVTVideoDecoderBadDataErr = -12909, // c.f. -8969
kVTVideoDecoderUnsupportedDataFormatErr = -12910, // c.f. -8970
kVTVideoDecoderMalfunctionErr = -12911, // c.f. -8960
kVTVideoEncoderMalfunctionErr = -12912,
kVTVideoDecoderNotAvailableNowErr = -12913,
kVTImageRotationNotSupportedErr = -12914,
kVTVideoEncoderNotAvailableNowErr = -12915,
kVTFormatDescriptionChangeNotSupportedErr = -12916,
kVTInsufficientSourceColorDataErr = -12917,
kVTCouldNotCreateColorCorrectionDataErr = -12918,
kVTColorSyncTransformConvertFailedErr = -12919,
kVTVideoDecoderAuthorizationErr = -12210,
kVTVideoEncoderAuthorizationErr = -12211,
kVTColorCorrectionPixelTransferFailedErr = -12212,
kVTMultiPassStorageIdentifierMismatchErr = -12213,
kVTMultiPassStorageInvalidErr = -12214,
kVTFrameSiloInvalidTimeStampErr = -12215,
kVTFrameSiloInvalidTimeRangeErr = -12216,
kVTCouldNotFindTemporalFilterErr = -12217,
kVTPixelTransferNotPermittedErr = -12218,
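The decode callback in the accepted answer only logs the raw OSStatus. As a small illustrative helper (not from either answer; the name VTErrorName is made up), you could map the most common codes from the list above to readable names when logging:

#import <VideoToolbox/VideoToolbox.h>

// translate a handful of frequent VTD status codes into their constant names for logging
static NSString *VTErrorName(OSStatus status)
{
    switch (status)
    {
        case kVTInvalidSessionErr:                    return @"kVTInvalidSessionErr";
        case kVTCouldNotFindVideoDecoderErr:          return @"kVTCouldNotFindVideoDecoderErr";
        case kVTVideoDecoderBadDataErr:               return @"kVTVideoDecoderBadDataErr";
        case kVTVideoDecoderUnsupportedDataFormatErr: return @"kVTVideoDecoderUnsupportedDataFormatErr";
        case kVTVideoDecoderMalfunctionErr:           return @"kVTVideoDecoderMalfunctionErr";
        default:                                      return [NSString stringWithFormat:@"OSStatus %d", (int)status];
    }
}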
In addition to the VTErrors above, I thought it's worth adding the CMFormatDescription, CMBlockBuffer, and CMSampleBuffer errors that you may encounter while trying Livy's example.

kCMFormatDescriptionError_InvalidParameter = -12710,
kCMFormatDescriptionError_AllocationFailed = -12711,
kCMFormatDescriptionError_ValueNotAvailable = -12718,

kCMBlockBufferNoErr = 0,
kCMBlockBufferStructureAllocationFailedErr = -12700,
kCMBlockBufferBlockAllocationFailedErr = -12701,
kCMBlockBufferBadCustomBlockSourceErr = -12702,
kCMBlockBufferBadOffsetParameterErr = -12703,
kCMBlockBufferBadLengthParameterErr = -12704,
kCMBlockBufferBadPointerParameterErr = -12705,
kCMBlockBufferEmptyBBufErr = -12706,
kCMBlockBufferUnallocatedBlockErr = -12707,
kCMBlockBufferInsufficientSpaceErr = -12708,

kCMSampleBufferError_AllocationFailed = -12730,
kCMSampleBufferError_RequiredParameterMissing = -12731,
kCMSampleBufferError_AlreadyHasDataBuffer = -12732,
kCMSampleBufferError_BufferNotReady = -12733,
kCMSampleBufferError_SampleIndexOutOfRange = -12734,
kCMSampleBufferError_BufferHasNoSampleSizes = -12735,
kCMSampleBufferError_BufferHasNoSampleTimingInfo = -12736,
kCMSampleBufferError_ArrayTooSmall = -12737,
kCMSampleBufferError_InvalidEntryCount = -12738,
kCMSampleBufferError_CannotSubdivide = -12739,
kCMSampleBufferError_SampleTimingInfoInvalid = -12740,
kCMSampleBufferError_InvalidMediaTypeForOperation = -12741,
kCMSampleBufferError_InvalidSampleData = -12742,
kCMSampleBufferError_InvalidMediaFormat = -12743,
kCMSampleBufferError_Invalidated = -12744,
kCMSampleBufferError_DataFailed = -16750,
kCMSampleBufferError_DataCanceled = -16751,