Export
To read and write audiovisual assets, you must use the export APIs provided by the AVFoundation framework. The AVAssetExportSession class provides an interface for simple exporting needs, such as modifying the file format or trimming the length of an asset (see Trimming and Transcoding a Movie). For more in-depth exporting needs, use the AVAssetReader and AVAssetWriter classes.
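For reference, a minimal export with AVAssetExportSession might look like the following sketch; the preset and output file type here are illustrative choices, not requirements.

```objc
AVAsset *asset = <#The AVAsset you want to export#>;
// Create an export session with one of the built-in presets.
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset
                                                                       presetName:AVAssetExportPresetHighestQuality];
exportSession.outputURL = <#NSURL for the exported file#>;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
// Export runs asynchronously; inspect the session's status in the completion handler.
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusFailed)
    {
        // Handle the error in exportSession.error here.
    }
}];
```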
Use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, use an AVAssetWriter object.
Note: The asset reader and writer classes are not intended to be used for real-time processing. In fact, an asset reader cannot even be used for reading from a real-time source like an HTTP live stream. However, if you are using an asset writer with a real-time data source, such as an AVCaptureOutput object, set the expectsMediaDataInRealTime property of your asset writer’s inputs to YES. Setting this property to YES for a non-real-time data source will result in your files not being interleaved properly.
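The note above can be sketched in one line; `captureWriterInput` is an assumed input whose data comes from a real-time source such as an AVCaptureOutput object.

```objc
AVAssetWriterInput *captureWriterInput = <#An AVAssetWriterInput receiving data from a real-time source#>;
// Opt in to real-time behavior only when the source is real-time; leave NO otherwise,
// or the output file will not be interleaved properly.
captureWriterInput.expectsMediaDataInRealTime = YES;
```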
Reading an Asset
Each AVAssetReader object can be associated only with a single asset at a time, but this asset may contain multiple tracks. For this reason, you must assign concrete subclasses of the AVAssetReaderOutput class to your asset reader before you begin reading in order to configure how the media data is read. There are three concrete subclasses of the AVAssetReaderOutput base class that you can use for your asset reading needs: AVAssetReaderTrackOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderVideoCompositionOutput.
Creating the Asset Reader
All you need to initialize an AVAssetReader object is the asset that you want to read.
NSError *outError;
AVAsset *someAsset = <#AVAsset that you want to read#>;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];
BOOL success = (assetReader != nil);
Note: Always check that the asset reader returned to you is non-nil to ensure that the asset reader was initialized successfully. Otherwise, the error parameter (outError in the previous example) will contain the relevant error information.
Setting Up the Asset Reader Outputs
After you have created your asset reader, set up at least one output to receive the media data being read. When setting up your outputs, be sure to set the alwaysCopiesSampleData property to NO. In this way, you reap the benefits of performance improvements. In all of the examples within this chapter, this property could and should be set to NO.
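The property mentioned above is set directly on each output; a one-line sketch:

```objc
AVAssetReaderTrackOutput *output = <#An AVAssetReaderTrackOutput you have created#>;
// Avoid copying sample data when you don't modify the buffers; this improves performance.
output.alwaysCopiesSampleData = NO;
```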
If you want only to read media data from one or more tracks and potentially convert that data to a different format, use the AVAssetReaderTrackOutput class, using a single track output object for each AVAssetTrack object that you want to read from your asset. To decompress an audio track to Linear PCM with an asset reader, you set up your track output as follows:
AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
    [assetReader addOutput:trackOutput];
Note: To read the media data from a specific asset track in the format in which it was stored, pass nil to the outputSettings parameter.
You use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together using an AVAudioMix object or AVVideoComposition object, respectively. Typically, these outputs are used when your asset reader is reading from an AVComposition object.
With a single audio mix output, you can read multiple audio tracks from your asset that have been mixed together using an AVAudioMix object. To specify how the audio tracks are mixed, assign the mix to the AVAssetReaderAudioMixOutput object after initialization. The following code displays how to create an audio mix output with all of the audio tracks from your asset, decompress the audio tracks to Linear PCM, and assign an audio mix object to the output. For details on how to configure an audio mix, see Editing.
AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;
// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the audio mix output with the audio tracks and decompression settings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];
// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
    [assetReader addOutput:audioMixOutput];
Note: Passing nil for the audioSettings parameter tells the asset reader to return samples in a convenient uncompressed format. The same is true for the AVAssetReaderVideoCompositionOutput class.
The video composition output behaves in much the same way: You can read multiple video tracks from your asset that have been composited together using an AVVideoComposition object. To read the media data from multiple composited video tracks and decompress it to ARGB, set up your output as follows:
AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;
// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];
// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };
// Create the video composition output with the video tracks and decompression settings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];
// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
    [assetReader addOutput:videoCompositionOutput];
Reading the Asset’s Media Data
To start reading after setting up all of the outputs you need, call the startReading method on your asset reader. Next, retrieve the media data individually from each output using the copyNextSampleBuffer method. To start up an asset reader with a single output and read all of its media samples, do the following:
// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
    // Copy the next sample buffer from the reader output.
    CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
    if (sampleBuffer)
    {
        // Do something with sampleBuffer here.
        CFRelease(sampleBuffer);
        sampleBuffer = NULL;
    }
    else
    {
        // Find out why the asset reader output couldn't copy another sample buffer.
        if (self.assetReader.status == AVAssetReaderStatusFailed)
        {
            NSError *failureError = self.assetReader.error;
            // Handle the error here.
        }
        else
        {
            // The asset reader output has read all of its samples.
            done = YES;
        }
    }
}
Writing an Asset
You use the AVAssetWriter class to write media data from multiple sources to a single file of a specified file format. You don’t need to associate your asset writer object with a specific asset, but you must use a separate asset writer for each output file that you want to create. Because an asset writer can write media data from multiple sources, you must create an AVAssetWriterInput object for each individual track that you want to write to the output file. Each AVAssetWriterInput object expects to receive data in the form of CMSampleBufferRef objects, but if you want to append CVPixelBufferRef objects to your asset writer input, use the AVAssetWriterInputPixelBufferAdaptor class.
To create an asset writer, specify the URL for the output file and the desired file type. The following code displays how to initialize an asset writer to create a QuickTime movie:
NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
fileType:AVFileTypeQuickTimeMovie
error:&outError];
BOOL success = (assetWriter != nil);
Setting Up the Asset Writer Inputs
For your asset writer to be able to write media data, you must set up at least one asset writer input. For example, if your source of media data is already vending media samples as CMSampleBufferRef objects, just use the AVAssetWriterInput class. To set up an asset writer input that compresses audio media data to 128 kbps AAC and connect it to your asset writer, do the following:
// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
    .mChannelBitmap = 0,
    .mNumberChannelDescriptions = 0
};
// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
    AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
    AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
    AVSampleRateKey : [NSNumber numberWithInteger:44100],
    AVChannelLayoutKey : channelLayoutAsData,
    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};
// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];
// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
    [assetWriter addInput:assetWriterInput];
Note: If you want the media data to be written in the format in which it was stored, pass nil in the outputSettings parameter. Pass nil only if the asset writer was initialized with a fileType of AVFileTypeQuickTimeMovie.
Your asset writer input can optionally include some metadata or specify a different transform for a particular track using the metadata and transform properties respectively. For an asset writer input whose data source is a video track, you can maintain the video’s original transform in the output file by doing the following:
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;
Note: Set the metadata and transform properties before you begin writing with your asset writer for them to take effect.
When writing media data to the output file, sometimes you may want to allocate pixel buffers. To do so, use the AVAssetWriterInputPixelBufferAdaptor class. For greatest efficiency, instead of adding pixel buffers that were allocated using a separate pool, use the pixel buffer pool provided by the pixel buffer adaptor. The following code creates a pixel buffer object working in the RGB domain that will use CGImage objects to create its pixel buffers.
NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];
Note: All AVAssetWriterInputPixelBufferAdaptor objects must be connected to a single asset writer input. That asset writer input must accept media data of type AVMediaTypeVideo.
Writing Media Data
When you have configured all of the inputs needed for your asset writer, you are ready to begin writing media data. As you did with the asset reader, initiate the writing process with a call to the startWriting method. You then need to start a sample-writing session with a call to the startSessionAtSourceTime: method. All writing done by an asset writer has to occur within one of these sessions and the time range of each session defines the time range of media data included from within the source. For example, if your source is an asset reader that is supplying media data read from an AVAsset object and you don’t want to include media data from the first half of the asset, you would do the following:
CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
//Implementation continues.
Normally, to end a writing session you must call the endSessionAtSourceTime: method. However, if your writing session goes right up to the end of your file, you can end the writing session simply by calling the finishWriting method. To start up an asset writer with a single input and write all of its media data, do the following:
// Prepare the asset writer for writing.
[self.assetWriter startWriting];
// Start a sample-writing session.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
    while ([self.assetWriterInput isReadyForMoreMediaData])
    {
        // Get the next sample buffer.
        CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
        if (nextSampleBuffer)
        {
            // If it exists, append the next sample buffer to the output file.
            [self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
            CFRelease(nextSampleBuffer);
            nextSampleBuffer = nil;
        }
        else
        {
            // Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
            [self.assetWriterInput markAsFinished];
            break;
        }
    }
}];
The copyNextSampleBufferToWrite method in the code above is simply a stub. The location of this stub is where you would need to insert some logic to return CMSampleBufferRef objects representing the media data that you want to write. One possible source of sample buffers is an asset reader output.
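If the sample buffers do come from an asset reader output, the stub could simply forward to it. A sketch, assuming an `assetReaderOutput` property holds the relevant output:

```objc
// One possible implementation of the copyNextSampleBufferToWrite stub:
// vend the next sample buffer read by the asset reader output.
- (CMSampleBufferRef)copyNextSampleBufferToWrite
{
    return [self.assetReaderOutput copyNextSampleBuffer];
}
```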
Reencoding Assets
You can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects, you have more control over the conversion than you do with an AVAssetExportSession object. For example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process. The first step in this process is just to set up your asset reader outputs and asset writer inputs as desired. After your asset reader and writer are fully configured, you start up both of them with calls to the startReading and startWriting methods, respectively. The following code snippet displays how to use a single asset writer input to write media data supplied by a single asset reader output:
NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
// Create a serialization queue for reading and writing.
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
    while ([self.assetWriterInput isReadyForMoreMediaData])
    {
        // Get the asset reader output's next sample buffer.
        CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
        if (sampleBuffer != NULL)
        {
            // If it exists, append this sample buffer to the output file.
            BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
            CFRelease(sampleBuffer);
            sampleBuffer = NULL;
            // Check for errors that may have occurred when appending the new sample buffer.
            if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
            {
                NSError *failureError = self.assetWriter.error;
                // Handle the error.
            }
        }
        else
        {
            // If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
            if (self.assetReader.status == AVAssetReaderStatusFailed)
            {
                NSError *failureError = self.assetReader.error;
                // Handle the error here.
            }
            else
            {
                // The asset reader output must have vended all of its samples. Mark the input as finished.
                [self.assetWriterInput markAsFinished];
                break;
            }
        }
    }
}];
Putting It All Together: Using an Asset Reader and Writer in Tandem to Reencode an Asset
This brief code example illustrates how to use an asset reader and writer to reencode the first video and audio track of an asset into a new file. It shows how to:
- Use serialization queues to handle the asynchronous nature of reading and writing audiovisual data
- Initialize an asset reader and configure two asset reader outputs, one for audio and one for video
- Initialize an asset writer and configure two asset writer inputs, one for audio and one for video
- Use an asset reader to asynchronously supply media data to an asset writer through two different output/input combinations
- Use a dispatch group to be notified of completion of the reencoding process
- Allow a user to cancel the reencoding process once it has begun
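The dispatch-group idea in the list above can be outlined as follows; this is a simplified sketch, not the example's full implementation, and `dispatchGroup` is an assumed variable.

```objc
// Sketch: run a completion block once both the audio and video
// output/input combinations have finished their work.
dispatch_group_t dispatchGroup = dispatch_group_create();
dispatch_group_enter(dispatchGroup);   // the audio combination begins
dispatch_group_enter(dispatchGroup);   // the video combination begins
// ... each combination calls dispatch_group_leave(dispatchGroup)
// when it marks its asset writer input as finished ...
dispatch_group_notify(dispatchGroup, self.mainSerializationQueue, ^{
    // All reading and writing is done; finish up the reencoding process here.
});
```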
Note: To focus on the most relevant code, this example omits several aspects of a complete application. To use AVFoundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.
Handling the Initial Setup
Before you create your asset reader and writer and configure their outputs and inputs, you need to handle some initial setup. The first part of this setup involves creating three separate serialization queues to coordinate the reading and writing process.
NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];
// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];
// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);
The main serialization queue is used to coordinate the starting and stopping of the asset reader and writer (perhaps due to cancellation) and the other two serialization queues are used to serialize the reading and writing by each output/input combination with a potential cancellation.
Now that you have some serialization queues, load the tracks of your asset and begin the reencoding process.
self.asset = <#AVAsset that you want to reencode#>;
self.cancelled = NO;
self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;
// Asynchronously load the tracks of the asset you want to read.
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
    // Once the tracks have finished loading, dispatch the work to the main serialization queue.
    dispatch_async(self.mainSerializationQueue, ^{
        // Due to asynchronous nature, check to see if user has already cancelled.
        if (self.cancelled)
            return;
        BOOL success = YES;
        NSError *localError = nil;
        // Check for success of loading the asset's tracks.
        success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
        if (success)
        {
            // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
            NSFileManager *fm = [NSFileManager defaultManager];
            NSString *localOutputPath = [self.outputURL path];
            if ([fm fileExistsAtPath:localOutputPath])
                success = [fm removeItemAtPath:localOutputPath error:&localError];
        }
        if (success)
            success = [self setupAssetReaderAndAssetWriter:&localError];
        if (success)
            success = [self startAssetReaderAndWriter:&localError];
        if (!success)
            [self readingAndWritingDidFinishSuccessfully:success withError:localError];
    });
}];
When the track loading process finishes, whether successfully or not, the rest of the work is dispatched to the main serialization queue to ensure that all of this work is serialized with a potential cancellation. Now all that’s left is to implement the cancellation process and the three custom methods at the end of the previous code listing.
Initializing the Asset Reader and Writer
The custom setupAssetReaderAndAssetWriter: method initializes the reader and writer and configures two output/input combinations, one for an audio track and one for a video track. In this example, the audio is decompressed to Linear PCM using the asset reader and compressed back to 128 kbps AAC using the asset writer. The video is decompressed to YUV using the asset reader and compressed to H.264 using the asset writer.
- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
    // Create and initialize the asset reader.
    self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
    BOOL success = (self.assetReader != nil);
    if (success)
    {
        // If the asset reader was successfully initialized, do the same for the asset writer.
        self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:outError];
        success = (self.assetWriter != nil);
    }
    if (success)
    {
        // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
        AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
        NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
        if ([audioTracks count] > 0)
            assetAudioTrack = [audioTracks objectAtIndex:0];
        NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
        if ([videoTracks count] > 0)
            assetVideoTrack = [videoTracks objectAtIndex:0];
        if (assetAudioTrack)
        {
            // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
            NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
            self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack
                                                                                     outputSettings:decompressionAudioSettings];
            [self.assetReader addOutput:self.assetReaderAudioOutput];
            // Then, set the compression settings to 128 kbps AAC and create the asset writer input.
            AudioChannelLayout stereoChannelLayout = {
                .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                .mChannelBitmap = 0,
                .mNumberChannelDescriptions = 0
            };
            NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
            NSDictionary *compressionAudioSettings = @{
                AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
                AVSampleRateKey : [NSNumber numberWithInteger:44100],
                AVChannelLayoutKey : channelLayoutAsData,
                AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
            };
            self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType]
                                                                            outputSettings:compressionAudioSettings];
            [self.assetWriter addInput:self.assetWriterAudioInput];
        }
if (assetVideoTrack)
{
// If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
//如果有视频轨道要读取,请为YUV设置解压缩设置并创建资产读取器输出。
NSDictionary *decompressionVideoSettings = @{
(id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
(id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
};
self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack
outputSettings:decompressionVideoSettings];
[self.assetReader addOutput:self.assetReaderVideoOutput];
CMFormatDescriptionRef formatDescription = NULL;
// Grab the video format descriptions from the video track and grab the first one if it exists.
//从视频轨道抓取视频格式描述,如果存在,抓住第一个。
NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
if ([videoFormatDescriptions count] > 0)
formatDescription = (__bridge CMFormatDescriptionRef)[formatDescriptions objectAtIndex:0];
CGSize trackDimensions = {
.width = 0.0,
.height = 0.0,
};
// If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
if (formatDescription)
trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
else
trackDimensions = [assetVideoTrack naturalSize];
NSDictionary *compressionSettings = nil;
// If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
if (formatDescription)
{
NSDictionary *cleanAperture = nil;
NSDictionary *pixelAspectRatio = nil;
CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
if (cleanApertureFromCMFormatDescription)
{
cleanAperture = @{
AVVideoCleanApertureWidthKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
AVVideoCleanApertureHeightKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
AVVideoCleanApertureVerticalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
};
}
CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
if (pixelAspectRatioFromCMFormatDescription)
{
pixelAspectRatio = @{
AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
AVVideoPixelAspectRatioVerticalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
};
}
// Add whichever settings we could grab from the format description to the compression settings dictionary.
if (cleanAperture || pixelAspectRatio)
{
NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
if (cleanAperture)
[mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
if (pixelAspectRatio)
[mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
compressionSettings = mutableCompressionSettings;
}
}
// Create the video settings dictionary for H.264.
NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
AVVideoCodecKey : AVVideoCodecH264,
AVVideoWidthKey : [NSNumber numberWithDouble:trackDimensions.width],
AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
}];
// Put the compression settings into the video settings dictionary if we were able to grab them.
if (compressionSettings)
[videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
// Create the asset writer input and add it to the asset writer.
self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType]
outputSettings:videoSettings];
[self.assetWriter addInput:self.assetWriterVideoInput];
}
}
return success;
}
Reencoding the Asset
Provided that the asset reader and writer are successfully initialized and configured, the startAssetReaderAndWriter: method described in Handling the Initial Setup is called. This method is where the actual reading and writing of the asset takes place.
- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
BOOL success = YES;
// Attempt to start the asset reader.
success = [self.assetReader startReading];
if (!success)
*outError = [self.assetReader error];
if (success)
{
// If the reader started successfully, attempt to start the asset writer.
success = [self.assetWriter startWriting];
if (!success)
*outError = [self.assetWriter error];
}
if (success)
{
// If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
self.dispatchGroup = dispatch_group_create();
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
self.audioFinished = NO;
self.videoFinished = NO;
if (self.assetWriterAudioInput)
{
// If there is audio to reencode, enter the dispatch group before beginning the work.
dispatch_group_enter(self.dispatchGroup);
// Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
[self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{
// Because the block is called asynchronously, check to see whether its task is complete.
if (self.audioFinished)
return;
BOOL completedOrFailed = NO;
// If the task isn't complete yet, make sure that the input is actually ready for more media data.
while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next audio sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
if (completedOrFailed)
{
// Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
BOOL oldFinished = self.audioFinished;
self.audioFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterAudioInput markAsFinished];
}
dispatch_group_leave(self.dispatchGroup);
}
}];
}
if (self.assetWriterVideoInput)
{
// If we had video to reencode, enter the dispatch group before beginning the work.
dispatch_group_enter(self.dispatchGroup);
// Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
[self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{
// Because the block is called asynchronously, check to see whether its task is complete.
if (self.videoFinished)
return;
BOOL completedOrFailed = NO;
// If the task isn't complete yet, make sure that the input is actually ready for more media data.
while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next video sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
if (completedOrFailed)
{
// Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
BOOL oldFinished = self.videoFinished;
self.videoFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterVideoInput markAsFinished];
}
dispatch_group_leave(self.dispatchGroup);
}
}];
}
// Set up the notification that the dispatch group will send when the audio and video work have both finished.
dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
BOOL finalSuccess = YES;
NSError *finalError = nil;
// Check to see if the work has finished due to cancellation.
if (self.cancelled)
{
// If so, cancel the reader and writer.
[self.assetReader cancelReading];
[self.assetWriter cancelWriting];
}
else
{
// If cancellation didn't occur, first make sure that the asset reader didn't fail.
if ([self.assetReader status] == AVAssetReaderStatusFailed)
{
finalSuccess = NO;
finalError = [self.assetReader error];
}
// If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
if (finalSuccess)
{
finalSuccess = [self.assetWriter finishWriting];
if (!finalSuccess)
finalError = [self.assetWriter error];
}
}
// Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
[self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
});
}
// Return success here to indicate whether the asset reader and writer were started successfully.
return success;
}
During reencoding, the audio and video tracks are asynchronously handled on individual serialization queues to increase the overall performance of the process, but both queues are contained within the same dispatch group. By placing the work for each track within the same dispatch group, the group can send a notification when all of the work is done and the success of the reencoding process can be determined.
Handling Completion
To handle the completion of the reading and writing process, the readingAndWritingDidFinishSuccessfully: method is called—with parameters indicating whether or not the reencoding completed successfully. If the process didn’t finish successfully, the asset reader and writer are both canceled and any UI related tasks are dispatched to the main queue.
- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
if (!success)
{
// If the reencoding process failed, we need to cancel the asset reader and writer.
[self.assetReader cancelReading];
[self.assetWriter cancelWriting];
dispatch_async(dispatch_get_main_queue(), ^{
// Handle any UI tasks here related to failure.
});
}
else
{
// Reencoding was successful, reset booleans.
self.cancelled = NO;
self.videoFinished = NO;
self.audioFinished = NO;
dispatch_async(dispatch_get_main_queue(), ^{
// Handle any UI tasks here related to success.
});
}
}
Using multiple serialization queues, you can allow the user of your app to cancel the reencoding process with ease. On the main serialization queue, messages are asynchronously sent to each of the asset reencoding serialization queues to cancel their reading and writing. When these two serialization queues complete their cancellation, the dispatch group sends a notification to the main serialization queue where the cancelled property is set to YES. You might associate the cancel method from the following code listing with a button on your UI.
- (void)cancel
{
// Handle cancellation asynchronously, but serialize it with the main queue.
dispatch_async(self.mainSerializationQueue, ^{
// If we had audio data to reencode, we need to cancel the audio work.
if (self.assetWriterAudioInput)
{
// Handle cancellation asynchronously again, but this time serialize it with the audio queue.
dispatch_async(self.rwAudioSerializationQueue, ^{
// Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
BOOL oldFinished = self.audioFinished;
self.audioFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterAudioInput markAsFinished];
}
// Leave the dispatch group since the audio work is finished now.
dispatch_group_leave(self.dispatchGroup);
});
}
if (self.assetWriterVideoInput)
{
// Handle cancellation asynchronously again, but this time serialize it with the video queue.
dispatch_async(self.rwVideoSerializationQueue, ^{
// Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
BOOL oldFinished = self.videoFinished;
self.videoFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterVideoInput markAsFinished];
}
// Leave the dispatch group, since the video work is finished now.
dispatch_group_leave(self.dispatchGroup);
});
}
// Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
self.cancelled = YES;
});
}
Asset Output Settings Assistant
The AVOutputSettingsAssistant class aids in creating output-settings dictionaries for an asset reader or writer. This makes setup much simpler, especially for high-frame-rate H.264 movies that have a number of specific presets. Listing 5-1 shows an example of how to use the settings assistant.
Listing 5-1 AVOutputSettingsAssistant sample
AVOutputSettingsAssistant *outputSettingsAssistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:<some preset>];
CMFormatDescriptionRef audioFormat = [self getAudioFormat];
if (audioFormat != NULL)
[outputSettingsAssistant setSourceAudioFormat:(CMAudioFormatDescriptionRef)audioFormat];
CMFormatDescriptionRef videoFormat = [self getVideoFormat];
if (videoFormat != NULL)
[outputSettingsAssistant setSourceVideoFormat:(CMVideoFormatDescriptionRef)videoFormat];
CMTime assetMinVideoFrameDuration = [self getMinFrameDuration];
CMTime averageFrameDuration = [self getAvgFrameDuration];
[outputSettingsAssistant setSourceVideoAverageFrameDuration:averageFrameDuration];
[outputSettingsAssistant setSourceVideoMinFrameDuration:assetMinVideoFrameDuration];
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<some URL> fileType:[outputSettingsAssistant outputFileType] error:NULL];
AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:[outputSettingsAssistant audioSettings] sourceFormatHint:audioFormat];
AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:[outputSettingsAssistant videoSettings] sourceFormatHint:videoFormat];