This article walks through building an AR world with ARKit and implementing image detection, plane detection, and face tracking, placing virtual scenes into the real world. Blending the virtual with the real is the essence of AR.
I. How an AR scene comes into being
1. ARKit captures the real-world image through the device camera.
2. Apple's game engines (SceneKit for 3D, SpriteKit for 2D) load and render model content into the virtual scene. AR display cannot be separated from a rendering engine; without one, ARKit is no different from an ordinary camera.
3. The model is placed into the AR scene. For a virtual object to blend convincingly with the real world, it needs size, distance, orientation, and so on; this relies mainly on sensor tracking plus coordinate recognition and conversion.
Sensor tracking: tracks six degrees of freedom of motion in the real world, translation along and rotation about the X, Y, and Z axes. (Note: ARKit uses a right-handed coordinate system.)
The three translation axes determine the object's position and apparent size;
the three rotation axes determine its orientation and the area it presents.
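In SceneKit these six degrees of freedom map directly onto SCNNode properties; a minimal sketch (the node and values are illustrative only):
// A node's translation and rotation, in ARKit's right-handed, meter-based coordinates
SCNNode *demoNode = [SCNNode node];
// Translation along X, Y, Z (position relative to the parent node)
demoNode.position = SCNVector3Make(0.1, 0, -0.5); // 0.5 m in front of the origin
// Rotation about X, Y, Z, in radians (Euler angles)
demoNode.eulerAngles = SCNVector3Make(0, M_PI_4, 0); // 45° about the Y axis
// Scale completes the picture of how large the object appears
demoNode.scale = SCNVector3Make(1, 1, 1);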
II. Building the AR world
The AR world has three essentials: the world tracking configuration ARWorldTrackingConfiguration, the AR scene view ARSCNView, and the virtual scene SCNScene.
Code:
#pragma mark - lazy load
- (ARSCNView *)sceneView{
    if (!_sceneView) {
        _sceneView = [[ARSCNView alloc] initWithFrame:CGRectMake(0, 0, [UIScreen mainScreen].bounds.size.width, [UIScreen mainScreen].bounds.size.height)];
        _sceneView.delegate = self;
    }
    return _sceneView;
}
- (ARWorldTrackingConfiguration *)configuration{
    if (!_configuration) {
        _configuration = [[ARWorldTrackingConfiguration alloc] init];
    }
    return _configuration;
}
- (SCNScene *)scene{
    if (!_scene) {
        _scene = [[SCNScene alloc] init];
    }
    return _scene;
}
@property (nonatomic, strong) ARSCNView *sceneView;
@property (nonatomic, strong) ARWorldTrackingConfiguration *configuration;//AR world tracking configuration
@property (nonatomic, strong) SCNScene *scene;
/**
 * Player object
 */
@property (nonatomic, strong) AVPlayer *player;
@property (nonatomic, strong) SCNNode *playerParanNode;//carrier node for the video player
- (void)viewWillAppear:(BOOL)animated{
    [super viewWillAppear:animated];
    [self initARSceneView];
    [self startARWorldTrackingConfiguration];
}
- (void)viewDidAppear:(BOOL)animated{
    [super viewDidAppear:animated];
}
- (void)viewWillDisappear:(BOOL)animated{
    [super viewWillDisappear:animated];
    [self.sceneView.session pause];
}
//Initialize the AR scene
- (void)initARSceneView{
    self.sceneView.scene = self.scene;
    [self.view addSubview:self.sceneView];
}
//Start AR world tracking
- (void)startARWorldTrackingConfiguration{
    switch (self.arType) {
        case ARWorldTrackingConfigurationType_detectionImage:{//image detection
            if (@available(iOS 11.3, *)) {
                //Point the configuration's detectionImages at the reference-image group (AR Resource Group)
                //Note: every reference image in the group must have its physical size set, or it cannot be used
                self.configuration.detectionImages = [ARReferenceImage referenceImagesInGroupNamed:@"ARDetectionImageResource" bundle:nil];
                //Start AR tracking
                [self.sceneView.session runWithConfiguration:self.configuration options:ARSessionRunOptionResetTracking | ARSessionRunOptionRemoveExistingAnchors];
            }
            break;
        }
        case ARWorldTrackingConfigurationType_planeDetection:{//plane detection
            if (@available(iOS 11.3, *)) {
                self.configuration.planeDetection = ARPlaneDetectionHorizontal; // | ARPlaneDetectionVertical;
                //Start AR tracking
                [self.sceneView.session runWithConfiguration:self.configuration options:ARSessionRunOptionResetTracking | ARSessionRunOptionRemoveExistingAnchors];
            }
            break;
        }
        case ARWorldTrackingConfigurationType_faceTracking:{//face tracking
            if (@available(iOS 11.3, *)) {
                [self.sceneView.session runWithConfiguration:self.faceConfiguration];
            }
            break;
        }
        case ARWorldTrackingConfigurationType_faceTrackingBlendShapes:{//face tracking - blend shapes
            if (@available(iOS 11.3, *)) {
                [self.sceneView.session runWithConfiguration:self.faceConfiguration];
            }
            break;
        }
        default:{
            break;
        }
    }
}
At this point, running the project starts the AR session. Once the session runs with a configuration, ARKit manages the camera internally, so there is no need to open it manually. (The app's Info.plist does need an NSCameraUsageDescription entry, or the session cannot access the camera.)
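If the configuration cannot run (for example, camera access is denied), the session reports it through the ARSessionObserver methods that ARSCNViewDelegate inherits; a minimal sketch, not part of the original demo:
//Minimal session failure/interruption handling (ARSessionObserver, inherited by ARSCNViewDelegate)
- (void)session:(ARSession *)session didFailWithError:(NSError *)error {
    NSLog(@"AR session failed: %@", error.localizedDescription);
}
- (void)sessionWasInterrupted:(ARSession *)session {
    NSLog(@"AR session interrupted, e.g. the app moved to the background");
}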
The demo project covers the four tracking scenarios below; the session is run with the configuration matching the selected type (see the cases above):
//AR tracking scenarios
typedef enum : NSUInteger {
    ARWorldTrackingConfigurationType_detectionImage,//image detection
    ARWorldTrackingConfigurationType_planeDetection,//plane detection
    ARWorldTrackingConfigurationType_faceTracking,//face tracking
    ARWorldTrackingConfigurationType_faceTrackingBlendShapes,//blend-shape (expression) detection
} ARWorldTrackingConfigurationType;
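The self.arType used throughout the controller is never declared in the article; it is presumably a property of this enum type, roughly:
//Assumed declaration: the scenario selected for the current controller instance
@property (nonatomic, assign) ARWorldTrackingConfigurationType arType;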
Once the real-world scene is being tracked, we adopt ARSCNView's ARSCNViewDelegate protocol (the delegate is set in the lazy getter above), and ARKit calls back with the tracked node information. The handling for each kind of detection follows:
1. Image detection + video playback
When a target image is recognized, the following callback runs; we attach a video-player node to the image and play the matching video. (imageHC01 and 0003 are the names of the reference images I added to the project, each with a matching video resource of the same name; change them as needed.)
//AR tracking callback - called as anchors update
- (void)renderer:(id <SCNSceneRenderer>)renderer didUpdateNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor {
    if (self.arType == ARWorldTrackingConfigurationType_detectionImage) {//image detection
        if (![anchor isKindOfClass:[ARImageAnchor class]]) { return; }
        ARImageAnchor *imageAnchor = (ARImageAnchor *)anchor;
        //Get the reference image that was recognized
        ARReferenceImage *referenceImage = imageAnchor.referenceImage;
        if ([referenceImage.name isEqualToString:@"imageHC01"] || [referenceImage.name isEqualToString:@"0003"]) {//one of our target images
            //Pause and remove any previously added player node
            [self.player pause];
            [self.playerParanNode removeFromParentNode];
            //Load the video resource matching this image
            [self setPlayerVideoItemWithDetectionImageName:referenceImage.name];
            //Create a node sized to the reference image
            self.playerParanNode = [SCNNode new];
            SCNBox *box = [SCNBox boxWithWidth:referenceImage.physicalSize.width height:referenceImage.physicalSize.height length:0.001 chamferRadius:0];
            self.playerParanNode.geometry = box;//a thin box carries the video
            //Rotate the node so it lies flat on the image (right-handed coordinates)
            self.playerParanNode.eulerAngles = SCNVector3Make(-M_PI/2, 0, 0);
            //Use the player as the box material's contents
            SCNMaterial *material = [[SCNMaterial alloc] init];
            material.diffuse.contents = self.player;
            self.playerParanNode.geometry.materials = @[material];
            //Play immediately
            [self.player play];
            [node addChildNode:self.playerParanNode];
        }
    }
}
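The video plays through once. If it should loop while the image is tracked, one option (not in the original demo) is to observe the end-of-playback notification; object: is nil because the player item is swapped per image:
//Register once, e.g. in viewDidLoad
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(playerItemDidReachEnd:)
                                             name:AVPlayerItemDidPlayToEndTimeNotification
                                           object:nil];
//Rewind and resume when playback finishes
- (void)playerItemDidReachEnd:(NSNotification *)notification {
    [self.player seekToTime:kCMTimeZero];
    [self.player play];
}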
Creating the AVPlayer and loading the resource:
/**
 Player object
 @return AVPlayer
 */
- (AVPlayer *)player{
    if (!_player) {
        _player = [[AVPlayer alloc] init];
    }
    return _player;
}
//Get the video path matching a reference image
- (NSURL *)getPlayVideoUrl:(NSString *)videoName{
    NSString *urlStr = [[NSBundle mainBundle] pathForResource:videoName ofType:@"mp4"];
    if (!urlStr) { return nil; }//no matching video bundled
    return [NSURL fileURLWithPath:urlStr];
}
//Load the video the player should play next
- (void)setPlayerVideoItemWithDetectionImageName:(NSString *)imageName{
    NSURL *videoUrl = [self getPlayVideoUrl:imageName];
    if (!videoUrl) { return; }
    //Swap the item on the lazily created player instead of replacing the player itself
    [self.player replaceCurrentItemWithPlayerItem:[AVPlayerItem playerItemWithURL:videoUrl]];
}
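As noted earlier, every image in the ARDetectionImageResource group must have its physical size set in the asset catalog. Reference images can also be built in code with initWithCGImage:orientation:physicalWidth:; a sketch, where the image name and the 10 cm width are illustrative assumptions:
//Create a reference image programmatically instead of via the asset catalog
UIImage *target = [UIImage imageNamed:@"imageHC01"];
if (target.CGImage) {
    ARReferenceImage *ref = [[ARReferenceImage alloc] initWithCGImage:target.CGImage
                                                          orientation:kCGImagePropertyOrientationUp
                                                        physicalWidth:0.1];//real-world width in meters
    ref.name = @"imageHC01";
    self.configuration.detectionImages = [NSSet setWithObject:ref];
}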
Image detection result:
2. Plane detection + model placement
To use ARKit's plane detection, set the planeDetection property on ARWorldTrackingConfiguration. Both horizontal and vertical planes can be detected (vertical plane detection requires iOS 11.3):
//Plane detection
self.configuration.planeDetection = ARPlaneDetectionHorizontal; // | ARPlaneDetectionVertical;
//Start AR tracking
[self.sceneView.session runWithConfiguration:self.configuration options:ARSessionRunOptionResetTracking | ARSessionRunOptionRemoveExistingAnchors];
We use a .scn model and place it on the detected plane. This is only a simple placement; later the model can get richer interaction, including animation and tap handling (see the sketch after the plane-detection result below).
When a plane is detected, the following callback fires, and we set up the model:
//AR tracking callback - a node was added for a new anchor
- (void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor{
    if (self.arType == ARWorldTrackingConfigurationType_planeDetection) {//plane detection
        if ([anchor isMemberOfClass:[ARPlaneAnchor class]]) {//a plane was detected
            //ARKit only detects; the anchor is just a position in space, so we add a 3D model ourselves
            //Get the detected plane anchor
            ARPlaneAnchor *planeAnchor = (ARPlaneAnchor *)anchor;
//            //Create a box as a simple 3D model (the detected plane is irregular, so scale it down)
//            SCNBox *planBox = [SCNBox boxWithWidth:planeAnchor.extent.x * 0.5 height:0 length:planeAnchor.extent.z * 0.5 chamferRadius:0];
//            //Render the model with a material
//            planBox.firstMaterial.diffuse.contents = [UIColor redColor];
//            //Create the model node
//            SCNNode *planeNode = [SCNNode nodeWithGeometry:planBox];
//            //Center the node on the detected plane's center
//            planeNode.position = SCNVector3Make(planeAnchor.center.x, 0, planeAnchor.center.z);
//
//            //Add the model to the detected node (with a colored material, the cuboid becomes visible)
//            [node addChildNode:planeNode];
            //Create the 3D model scene (to display a custom model)
            SCNScene *scene = [SCNScene sceneNamed:@"art.scnassets/vase/vase.scn"];
            //Get the model node
            //A scene can contain many nodes, but it has exactly one root node
            SCNNode *modelNode = scene.rootNode.childNodes.firstObject;
            //Position the model at the detected plane's center (by default it sits at the anchor position)
            modelNode.position = SCNVector3Make(planeAnchor.center.x, 0, planeAnchor.center.z);
            //Add the custom model node to the detected node
            [node addChildNode:modelNode];
        }
    }
}
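ARKit keeps refining a detected plane as more of the surface is scanned, updating the anchor's center and extent. If you display the box visualization commented out above, one option is to resize it in the update callback; a sketch, assuming the box node was added with the name @"planeNode" and that this logic is folded into the existing didUpdateNode: callback under the plane-detection branch:
//Inside - renderer:didUpdateNode:forAnchor:, plane-detection branch (sketch)
if (self.arType == ARWorldTrackingConfigurationType_planeDetection &&
    [anchor isKindOfClass:[ARPlaneAnchor class]]) {
    ARPlaneAnchor *planeAnchor = (ARPlaneAnchor *)anchor;
    SCNNode *planeNode = [node childNodeWithName:@"planeNode" recursively:NO];//assumed node name
    SCNBox *planeBox = (SCNBox *)planeNode.geometry;
    //Follow the anchor's refined extent and center
    planeBox.width = planeAnchor.extent.x;
    planeBox.length = planeAnchor.extent.z;
    planeNode.position = SCNVector3Make(planeAnchor.center.x, 0, planeAnchor.center.z);
}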
Plane detection result:
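As for the tap interaction mentioned above, here is a sketch of placing the vase model where the user taps, assuming a UITapGestureRecognizer on self.sceneView calls this action; it hit-tests the tap against detected planes:
//Place the model at the tapped point on a detected plane (sketch)
- (void)handleTap:(UITapGestureRecognizer *)gesture {
    CGPoint point = [gesture locationInView:self.sceneView];
    NSArray<ARHitTestResult *> *results = [self.sceneView hitTest:point types:ARHitTestResultTypeExistingPlaneUsingExtent];
    ARHitTestResult *hit = results.firstObject;
    if (!hit) { return; }
    SCNScene *scene = [SCNScene sceneNamed:@"art.scnassets/vase/vase.scn"];
    SCNNode *modelNode = scene.rootNode.childNodes.firstObject;
    //The hit's worldTransform carries the world position in its last column
    modelNode.position = SCNVector3Make(hit.worldTransform.columns[3].x,
                                        hit.worldTransform.columns[3].y,
                                        hit.worldTransform.columns[3].z);
    [self.sceneView.scene.rootNode addChildNode:modelNode];
}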
3. Face tracking + face texture
Face tracking needs device support: it requires the TrueDepth front camera (iPhone X or later). It also differs from image detection and plane detection in the tracker it uses; instead of ARWorldTrackingConfiguration, you create an ARFaceTrackingConfiguration.
Creating the ARFaceTrackingConfiguration:
@property (nonatomic, strong) ARConfiguration *faceConfiguration;//face tracking configuration
- (ARConfiguration *)faceConfiguration{
    if (!_faceConfiguration) {
        _faceConfiguration = [[ARFaceTrackingConfiguration alloc] init];
        _faceConfiguration.lightEstimationEnabled = YES;
    }
    return _faceConfiguration;
}
//Run the AR face tracking session
if (@available(iOS 11.3, *)) {
    [self.sceneView.session runWithConfiguration:self.faceConfiguration];
}
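Since face tracking only works with a TrueDepth camera, it is worth guarding with the configuration's isSupported check before running; a minimal sketch:
//Only run face tracking on devices that support it
if ([ARFaceTrackingConfiguration isSupported]) {
    [self.sceneView.session runWithConfiguration:self.faceConfiguration];
} else {
    NSLog(@"Face tracking is not supported on this device");
}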
Once a face is recognized, we attach a texture to it. As the expression changes (brow movement, mouth opening, and so on), the texture's geometry must be updated continually in the face node so it stays in sync with the face.
//Face texture node
@property (nonatomic, strong) SCNNode *faceTextureMaskNode;
- (void)renderer:(id<SCNSceneRenderer>)renderer willUpdateNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor{
    if (self.arType == ARWorldTrackingConfigurationType_faceTracking) {
        if (anchor && [anchor isKindOfClass:[ARFaceAnchor class]]) {//a face was recognized
            ARFaceAnchor *faceAnchor = (ARFaceAnchor *)anchor;
            if (!_faceTextureMaskNode) {//first callback: the getter lazily creates the mask node, then we attach it
                [node addChildNode:self.faceTextureMaskNode];
            }
            //Update the texture geometry in real time
            ARSCNFaceGeometry *faceGeometry = (ARSCNFaceGeometry *)self.faceTextureMaskNode.geometry;
            if (faceGeometry && [faceGeometry isKindOfClass:[ARSCNFaceGeometry class]]) {
                [faceGeometry updateFromFaceGeometry:faceAnchor.geometry];
            }
        }
    }else if (self.arType == ARWorldTrackingConfigurationType_faceTrackingBlendShapes){//blend-shape detection
        if (anchor && [anchor isKindOfClass:[ARFaceAnchor class]]) {//a face anchor was recognized
            ARFaceAnchor *faceAnchor = (ARFaceAnchor *)anchor;
            NSDictionary *blendShapes = faceAnchor.blendShapes;
            NSNumber *browInnerUp = blendShapes[ARBlendShapeLocationBrowInnerUp];//inner-brow raise coefficient (0-1)
            if ([browInnerUp floatValue] > 0.5) {
                NSLog(@"Brow raised............");
                if (!_glassTextureNode) {
                    [node addChildNode:self.glassTextureNode];
                }
                ARSCNFaceGeometry *faceGeometry = (ARSCNFaceGeometry *)self.glassTextureNode.geometry;
                if (faceGeometry && [faceGeometry isKindOfClass:[ARSCNFaceGeometry class]]) {
                    [faceGeometry updateFromFaceGeometry:faceAnchor.geometry];
                }
            }
        }
    }
}
/**
 Face texture node
 @return SCNNode
 */
- (SCNNode *)faceTextureMaskNode {
    if (!_faceTextureMaskNode) {
        id<MTLDevice> device = self.sceneView.device;
        ARSCNFaceGeometry *geometry = [ARSCNFaceGeometry faceGeometryWithDevice:device fillMesh:NO];
        SCNMaterial *material = geometry.firstMaterial;
        material.fillMode = SCNFillModeFill;
        material.diffuse.contents = [UIImage imageNamed:@"faceTexture.jpg"];
        _faceTextureMaskNode = [SCNNode nodeWithGeometry:geometry];
        _faceTextureMaskNode.name = @"textureMask";
    }
    return _faceTextureMaskNode;
}
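The glassTextureNode used in the blend-shape branch is not shown in the article; a minimal sketch, assuming it mirrors the getter above with a different texture (the property declaration and the glassTexture.png asset name are assumptions):
@property (nonatomic, strong) SCNNode *glassTextureNode;//overlay shown when the brow-raise threshold is crossed
- (SCNNode *)glassTextureNode {
    if (!_glassTextureNode) {
        id<MTLDevice> device = self.sceneView.device;
        ARSCNFaceGeometry *geometry = [ARSCNFaceGeometry faceGeometryWithDevice:device fillMesh:NO];
        geometry.firstMaterial.diffuse.contents = [UIImage imageNamed:@"glassTexture.png"];//assumed asset name
        _glassTextureNode = [SCNNode nodeWithGeometry:geometry];
    }
    return _glassTextureNode;
}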
Face tracking and face texture result:
That wraps up this article; next I plan to explore and implement interaction with 3D models in AR scenes. Thanks for reading!