Last time I used AVFoundation's AVCaptureVideoPreviewLayer to display the data captured by the camera. In the general case, though, the data may come from the network or from a file, and then AVCaptureVideoPreviewLayer is no longer an option.
First, a little background on video coding. A video is like a film: a sequence of still pictures. But storing every picture in full would waste a huge amount of space. Instead, the encoder keeps one complete key picture (an I-frame), and each following picture stores only the parts that differ from the previous one (P-frames). When the next key picture arrives, the process repeats. What ends up stored looks like IPPPPPIPPPPP, and each repeating segment is called a GOP (group of pictures). Some codecs also use B-frames, which are predicted bidirectionally from both earlier and later frames. Video coding is a complex topic; I recommend this article: http://www.skywind.me/blog/archives/1609 (may require a VPN to access from mainland China).
The decode-and-playback flow is comparatively simple (a VideoToolbox sketch follows the list):

- Receive a GOP's worth of data and hand it to the decoder
- The decoder reconstructs the images frame by frame and hands them to the front end
- The front end draws each image
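This post only deals with camera preview, but to make that pipeline concrete, here is a minimal sketch of the decoder step using VideoToolbox. It is not part of this post's project; `formatDesc` (built from the stream's parameter sets) and the compressed `sampleBuffer` are assumed to come from your demuxer:

```objc
#import <VideoToolbox/VideoToolbox.h>

// VideoToolbox calls this once per decoded frame; imageBuffer is a YUV CVPixelBuffer.
static void didDecodeFrame(void *decompressionOutputRefCon, void *sourceFrameRefCon,
                           OSStatus status, VTDecodeInfoFlags infoFlags,
                           CVImageBufferRef imageBuffer,
                           CMTime presentationTimeStamp, CMTime presentationDuration) {
    if (status == noErr && imageBuffer != NULL) {
        // Hand the frame to the front end for drawing.
    }
}

// formatDesc: a CMVideoFormatDescriptionRef built from the stream's SPS/PPS (assumed).
VTDecompressionOutputCallbackRecord callback = { didDecodeFrame, NULL };
VTDecompressionSessionRef session = NULL;
VTDecompressionSessionCreate(kCFAllocatorDefault, formatDesc,
                             NULL, NULL, &callback, &session);

// For each compressed sample delivered by the demuxer:
VTDecompressionSessionDecodeFrame(session, sampleBuffer, 0, NULL, NULL);
```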
One thing worth pointing out: the images inside a video are encoded as YUV, a color representation analogous to RGB. YUV is chosen because it saves space: in the common 4:2:0 subsampling, every 2×2 block of pixels shares a single Cb and a single Cr sample, so 4 pixels take 6 bytes instead of the 12 bytes that 24-bit RGB would need, half the size.
As mentioned in the previous post, AVCaptureSession can be wired to different outputs: a file, an AVCaptureVideoPreviewLayer, or, as here, the raw frames via AVCaptureVideoDataOutput.
```objc
//-- Create the output for the capture session.
AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES]; // Probably want to set this to NO when recording

//-- Set to YUV420; necessary for the manual preview below.
[dataOutput setVideoSettings:@{ (id)kCVPixelBufferPixelFormatTypeKey :
                                    @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) }];

// Deliver sample buffers on the main queue so the view can be updated directly.
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[_captureSession addOutput:dataOutput];
```
dataOutput must be given a kCVPixelBufferPixelFormatTypeKey. Here we pick a YUV 4:2:0 format, which is also what an H.264 decoder typically produces.
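Not every device supports every pixel format, so it can be worth checking first. A small optional sketch, not in the original project:

```objc
// Confirm the requested YUV format is actually available before committing to it.
NSNumber *desiredFormat = @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange);
if ([dataOutput.availableVideoCVPixelFormatTypes containsObject:desiredFormat]) {
    dataOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : desiredFormat };
}
```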
Once this is set up, every frame the camera captures is delivered to this delegate method:
```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
```
sampleBuffer is the data object we are after. In our case it carries an uncompressed image, and that image lives in a CVPixelBuffer.
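You can verify this right in the callback. For the format requested above, the pixel buffer is bi-planar, which the conversion code below relies on. A quick sanity check using Core Video accessors:

```objc
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
// format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
size_t planeCount = CVPixelBufferGetPlaneCount(pixelBuffer);
// planeCount == 2: plane 0 is Y (luma), plane 1 is interleaved CbCr (chroma)
```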
There are two ways to render it:

- Convert the CVPixelBuffer to a CGImage and hand it to an NSImageView or UIImageView
- Hand it to OpenGL for rendering

This post takes the first approach:
```objc
/*
 Adapted from:
 http://stackoverflow.com/questions/8838481/kcvpixelformattype-420ypcbcr8biplanarfullrange-frame-to-uiimage-conversion
*/
#define clamp(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a)))

- (NSImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t width  = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Bi-planar 4:2:0: plane 0 is Y (luma), plane 1 is interleaved CbCr (chroma).
    uint8_t *yBuffer    = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t yPitch       = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    uint8_t *cbCrBuffer = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
    size_t cbCrPitch    = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);

    int bytesPerPixel = 4;
    uint8_t *rgbBuffer = malloc(width * height * bytesPerPixel);

    for (int y = 0; y < height; y++) {
        uint8_t *rgbBufferLine  = &rgbBuffer[y * width * bytesPerPixel];
        uint8_t *yBufferLine    = &yBuffer[y * yPitch];
        // The chroma plane is half height: two luma rows share one chroma row.
        uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];

        for (int x = 0; x < width; x++) {
            int16_t luma = yBufferLine[x];
            // Chroma is also half width: Cb and Cr are interleaved, one pair per two pixels.
            int16_t cb = cbCrBufferLine[x & ~1] - 128;
            int16_t cr = cbCrBufferLine[x | 1] - 128;

            uint8_t *rgbOutput = &rgbBufferLine[x * bytesPerPixel];

            int16_t r = (int16_t)roundf(luma + cr * 1.4);
            int16_t g = (int16_t)roundf(luma + cb * -0.343 + cr * -0.711);
            int16_t b = (int16_t)roundf(luma + cb * 1.765);

            // Memory layout is X,B,G,R; with kCGBitmapByteOrder32Little plus
            // kCGImageAlphaNoneSkipLast, CoreGraphics reads it as RGB (alpha skipped).
            rgbOutput[0] = 0xff;
            rgbOutput[1] = clamp(b);
            rgbOutput[2] = clamp(g);
            rgbOutput[3] = clamp(r);
        }
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbBuffer, width, height, 8,
                                                 width * bytesPerPixel, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // UIImage *image = [UIImage imageWithCGImage:quartzImage]; // iOS equivalent
    NSImage *image = [[NSImage alloc] initWithCGImage:quartzImage size:NSZeroSize];

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(quartzImage);
    free(rgbBuffer);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return image;
}
```
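As an aside, if you only need an NSImage and not the raw RGB bytes, Core Image can do the same conversion in far fewer lines. A sketch, not used in this project:

```objc
// Let Core Image handle the YUV-to-RGB conversion.
CIImage *ciImage = [CIImage imageWithCVImageBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:ciImage];
NSImage *image = [[NSImage alloc] initWithSize:rep.size];
[image addRepresentation:rep];
```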
Finally, the delegate method hands each converted image to the view:

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSImage *nsImage = [self imageFromSampleBuffer:sampleBuffer];
    // The delegate queue is the main queue, so the view can be updated directly.
    [self.cameraView setImage:nsImage];
}
```
The conversion is tedious, but the idea is simple: pull the Y and CbCr planes out of the pixel buffer, map each pixel from YUV to RGB with a little arithmetic, and draw the result into an in-memory bitmap.
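For reference, the arithmetic inside the loop is the full-range BT.601 YUV-to-RGB mapping (the code rounds the standard coefficients slightly, to 1.4, 0.343, 0.711 and 1.765):

$$
\begin{aligned}
R &= Y + 1.402\,(C_r - 128)\\
G &= Y - 0.344\,(C_b - 128) - 0.714\,(C_r - 128)\\
B &= Y + 1.772\,(C_b - 128)
\end{aligned}
$$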
Sample project: https://github.com/annidy/AVCapturePreview2
Refreshing an image view this way is very CPU-hungry: running the whole pipeline, CPU usage sits above 60% while the frame rate is only about 13 fps, and roughly half of that time is spent in setImage. By comparison, AVCaptureVideoPreviewLayer keeps total CPU below 5%.

Conclusion: rendering through an image view works in principle, but it has a serious performance problem.