iOS GCD Internals (Part 3)

Preface

The previous installments, iOS-GCD原理分析(一) and iOS-GCD原理分析(二), covered GCD queues and functions, synchronous and asynchronous dispatch, the dispatch_sync deadlock, and the implementation of dispatch_once (the singleton pattern). This installment looks at barrier functions, semaphores, and dispatch groups.

1 Barrier Functions

1.1 Applications of Barrier Functions

A barrier controls the order of execution on a concurrent queue, producing a synchronization point:
dispatch_barrier_async: every task submitted before the barrier must finish before the barrier block runs; the call itself returns immediately and does not block the calling thread.
dispatch_barrier_sync: same queue-level behaviour as dispatch_barrier_async, but it also blocks the calling thread until the barrier block has run, holding up whatever comes after it on that thread.
A barrier only takes effect on the one concurrent queue it is submitted to.
A typical use case: one network request must finish before a second one is issued; a barrier on a shared concurrent queue solves this.

    dispatch_queue_t concurrentQueue = dispatch_queue_create("robert", DISPATCH_QUEUE_CONCURRENT);
    /* 1. async */
    dispatch_async(concurrentQueue, ^{
        sleep(2);
        NSLog(@"1");
    });
    /* 2. async */
    dispatch_async(concurrentQueue, ^{
        sleep(2);
        NSLog(@"2");
    });

    /* 3. barrier */
    dispatch_barrier_async(concurrentQueue, ^{
        NSLog(@"3:%@",[NSThread currentThread]);
    });
    /* 4. async */
    dispatch_async(concurrentQueue, ^{
        NSLog(@"4");
    });
    // 5
    NSLog(@"5:走你!!");

Let's trace how this code executes:


[Figure 2]

The order is 5 -> 2 -> 1 -> 3 -> 4.
dispatch_barrier_async is asynchronous and does not block the calling thread, so 5 prints first. Tasks 2 and 1 must complete before the barrier block (3) runs, and only after the barrier does task 4 execute: the barrier holds back the tasks submitted after it until the ones before it are done.
Now swap dispatch_barrier_async for dispatch_barrier_sync:

[Figure 3]

The order is 2 -> 1 -> 3 -> 5 -> 4.
dispatch_barrier_sync is synchronous: it blocks the calling thread until tasks 2 and 1 have finished and the barrier block (3) has run, so 5 can only print afterwards. Task 4, submitted asynchronously after the barrier, prints last. This confirms that dispatch_barrier_sync blocks the calling thread.
So far, both dispatch_barrier_sync and dispatch_barrier_async were submitted to the same queue as the tasks. What happens when the barrier goes to a different queue? First, dispatch_barrier_async:
[Figure 4]

The order is 5 -> 3 -> 4 -> 1 -> 2: the barrier did not hold back tasks 1 and 2 at all.
Now dispatch_barrier_sync:
[Figure 5]

The order is 2 -> 5 -> 4 -> 3 -> 1.
Again, tasks 2 and 1 are not held back.
A barrier blocks a queue, so it only affects tasks submitted to that same queue.
Now one more case, this time with the global concurrent queue:

    dispatch_queue_t concurrentQueue = dispatch_get_global_queue(0, 0);
    /* 1. async */
    dispatch_async(concurrentQueue, ^{
        sleep(2);
        NSLog(@"1");
    });
    /* 2. sync (note: this one is dispatch_sync) */
    dispatch_sync(concurrentQueue, ^{
        sleep(2);
        NSLog(@"2");
    });

    /* 3. barrier */
    dispatch_barrier_async(concurrentQueue, ^{
        NSLog(@"3:%@",[NSThread currentThread]);
    });
    /* 4. async */
    dispatch_async(concurrentQueue, ^{
        NSLog(@"4");
    });
    // 5
    NSLog(@"5:走你!!");

The execution order:

[Figure 6]

The order is 2 -> 5 -> 4 -> 3 -> 1: the barrier block (3) ran before task 1 finished, so it did not wait for the tasks submitted ahead of it, even though everything here — barrier included — went to the same queue, the global concurrent queue from dispatch_get_global_queue. The conclusion: a barrier cannot block the global concurrent queue; it only works on a concurrent queue you created yourself.
On a private concurrent queue, then, a barrier is a tool for synchronizing threads and protecting shared state.

1.2 How Barrier Functions Work

Why can a barrier block only a private concurrent queue and not the global one? Let's look at the libdispatch source. Searching for dispatch_barrier_sync finds:

void
dispatch_barrier_sync(dispatch_queue_t dq, dispatch_block_t work)
{
    uintptr_t dc_flags = DC_FLAG_BARRIER | DC_FLAG_BLOCK;
    if (unlikely(_dispatch_block_has_private_data(work))) {
        return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
    }
    _dispatch_barrier_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}

This calls _dispatch_barrier_sync_f, which closely mirrors the plain dispatch_sync path:

static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    _dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}

And then _dispatch_barrier_sync_f_inline:

static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    dispatch_tid tid = _dispatch_tid_self();

    if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
        DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
    }

    dispatch_lane_t dl = upcast(dq)._dl;
    // The more correct thing to do would be to merge the qos of the thread
    // that just acquired the barrier lock into the queue state.
    //
    // However this is too expensive for the fast path, so skip doing it.
    // The chosen tradeoff is that if an enqueue on a lower priority thread
    // contends with this fast path, this thread may receive a useless override.
    //
    // Global concurrent queues and queues bound to non-dispatch threads
    // always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
    if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
                DC_FLAG_BARRIER | dc_flags);
    }

    if (unlikely(dl->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func,
                DC_FLAG_BARRIER | dc_flags);
    }
    _dispatch_introspection_sync_begin(dl);
    _dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
            DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
                    dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}

Focus on this part:

    if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
                DC_FLAG_BARRIER | dc_flags);
    }

    if (unlikely(dl->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func,
                DC_FLAG_BARRIER | dc_flags);
    }
    _dispatch_introspection_sync_begin(dl);
    _dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
            DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
                    dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));

This shows a barrier can deadlock in the same way dispatch_sync can: _dispatch_sync_f_slow contains the deadlock detection.
Next, _dispatch_sync_recurse:

static void
_dispatch_sync_recurse(dispatch_lane_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    dispatch_tid tid = _dispatch_tid_self();
    dispatch_queue_t tq = dq->do_targetq;

    do {
        if (likely(tq->dq_width == 1)) {
            if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(tq, tid))) {
                return _dispatch_sync_f_slow(dq, ctxt, func, dc_flags, tq,
                        DC_FLAG_BARRIER);
            }
        } else {
            dispatch_queue_concurrent_t dl = upcast(tq)._dl;
            if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
                return _dispatch_sync_f_slow(dq, ctxt, func, dc_flags, tq, 0);
            }
        }
        tq = tq->do_targetq;
    } while (unlikely(tq->do_targetq));

    _dispatch_introspection_sync_begin(dq);
    _dispatch_sync_invoke_and_complete_recurse(dq, ctxt, func, dc_flags
            DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
                    dq, ctxt, func, dc_flags)));
}

The key piece is this loop:

    do {
        if (likely(tq->dq_width == 1)) {
            if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(tq, tid))) {
                return _dispatch_sync_f_slow(dq, ctxt, func, dc_flags, tq,
                        DC_FLAG_BARRIER);
            }
        } else {
            dispatch_queue_concurrent_t dl = upcast(tq)._dl;
            if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
                return _dispatch_sync_f_slow(dq, ctxt, func, dc_flags, tq, 0);
            }
        }
        tq = tq->do_targetq;
    } while (unlikely(tq->do_targetq));

The do/while walks up the target-queue chain. `tq->dq_width == 1` distinguishes a serial queue (width 1) from a concurrent one; in either case, failing to acquire the barrier lock or to reserve width falls back to _dispatch_sync_f_slow.
The loop keeps following tq->do_targetq until it reaches a root queue: everything ahead on the queue hierarchy must drain before execution can continue past the barrier.
Next, _dispatch_sync_invoke_and_complete_recurse:

static void
_dispatch_sync_invoke_and_complete_recurse(dispatch_queue_class_t dq,
        void *ctxt, dispatch_function_t func, uintptr_t dc_flags
        DISPATCH_TRACE_ARG(void *dc))
{
    _dispatch_sync_function_invoke_inline(dq, ctxt, func);
    _dispatch_trace_item_complete(dc);
    _dispatch_sync_complete_recurse(dq._dq, NULL, dc_flags);
}

Which calls _dispatch_sync_complete_recurse:

static void
_dispatch_sync_complete_recurse(dispatch_queue_t dq, dispatch_queue_t stop_dq,
        uintptr_t dc_flags)
{
    bool barrier = (dc_flags & DC_FLAG_BARRIER);
    do {
        if (dq == stop_dq) return;
        if (barrier) {
            dx_wakeup(dq, 0, DISPATCH_WAKEUP_BARRIER_COMPLETE);
        } else {
            _dispatch_lane_non_barrier_complete(upcast(dq)._dl, 0);
        }
        dq = dq->do_targetq;
        barrier = (dq->dq_width == 1);
    } while (unlikely(dq->do_targetq));
}

This checks the barrier flag. If it is set, dx_wakeup is invoked with DISPATCH_WAKEUP_BARRIER_COMPLETE to wake the queue and release the work the barrier was holding back; if not, _dispatch_lane_non_barrier_complete finishes the item without any barrier handling. Its code:

static void
_dispatch_lane_non_barrier_complete(dispatch_lane_t dq,
        dispatch_wakeup_flags_t flags)
{
    uint64_t old_state, new_state, owner_self = _dispatch_lock_value_for_self();

    // see _dispatch_lane_resume()
    os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, relaxed, {
        new_state = old_state - DISPATCH_QUEUE_WIDTH_INTERVAL;
        if (unlikely(_dq_state_drain_locked(old_state))) {
            // make drain_try_unlock() fail and reconsider whether there's
            // enough width now for a new item
            new_state |= DISPATCH_QUEUE_DIRTY;
        } else if (likely(_dq_state_is_runnable(new_state))) {
            new_state = _dispatch_lane_non_barrier_complete_try_lock(dq,
                    old_state, new_state, owner_self);
        }
    });

    _dispatch_lane_non_barrier_complete_finish(dq, flags, old_state, new_state);
}

_dispatch_lane_non_barrier_complete_finish completes the item and repairs the queue state.
dx_wakeup is a macro wrapping the dq_wakeup slot of the class vtable. Searching for dq_wakeup finds the vtable definitions:

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_serial, lane,
    .do_type        = DISPATCH_QUEUE_SERIAL_TYPE,
    .do_dispose     = _dispatch_lane_dispose,
    .do_debug       = _dispatch_queue_debug,
    .do_invoke      = _dispatch_lane_invoke,

    .dq_activate    = _dispatch_lane_activate,
    .dq_wakeup      = _dispatch_lane_wakeup,
    .dq_push        = _dispatch_lane_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
    .do_type        = DISPATCH_QUEUE_CONCURRENT_TYPE,
    .do_dispose     = _dispatch_lane_dispose,
    .do_debug       = _dispatch_queue_debug,
    .do_invoke      = _dispatch_lane_invoke,

    .dq_activate    = _dispatch_lane_activate,
    .dq_wakeup      = _dispatch_lane_wakeup,
    .dq_push        = _dispatch_lane_concurrent_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_global, lane,
    .do_type        = DISPATCH_QUEUE_GLOBAL_ROOT_TYPE,
    .do_dispose     = _dispatch_object_no_dispose,
    .do_debug       = _dispatch_queue_debug,
    .do_invoke      = _dispatch_object_no_invoke,

    .dq_activate    = _dispatch_queue_no_activate,
    .dq_wakeup      = _dispatch_root_queue_wakeup,
    .dq_push        = _dispatch_root_queue_push,
);

#if DISPATCH_USE_PTHREAD_ROOT_QUEUES
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_pthread_root, lane,
    .do_type        = DISPATCH_QUEUE_PTHREAD_ROOT_TYPE,
    .do_dispose     = _dispatch_pthread_root_queue_dispose,
    .do_debug       = _dispatch_queue_debug,
    .do_invoke      = _dispatch_object_no_invoke,

    .dq_activate    = _dispatch_queue_no_activate,
    .dq_wakeup      = _dispatch_root_queue_wakeup,
    .dq_push        = _dispatch_root_queue_push,
);
#endif // DISPATCH_USE_PTHREAD_ROOT_QUEUES

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_mgr, lane,
    .do_type        = DISPATCH_QUEUE_MGR_TYPE,
    .do_dispose     = _dispatch_object_no_dispose,
    .do_debug       = _dispatch_queue_debug,
#if DISPATCH_USE_MGR_THREAD
    .do_invoke      = _dispatch_mgr_thread,
#else
    .do_invoke      = _dispatch_object_no_invoke,
#endif

    .dq_activate    = _dispatch_queue_no_activate,
    .dq_wakeup      = _dispatch_mgr_queue_wakeup,
    .dq_push        = _dispatch_mgr_queue_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_main, lane,
    .do_type        = DISPATCH_QUEUE_MAIN_TYPE,
    .do_dispose     = _dispatch_lane_dispose,
    .do_debug       = _dispatch_queue_debug,
    .do_invoke      = _dispatch_lane_invoke,

    .dq_activate    = _dispatch_queue_no_activate,
    .dq_wakeup      = _dispatch_main_queue_wakeup,
    .dq_push        = _dispatch_main_queue_push,
);

#if DISPATCH_COCOA_COMPAT
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_runloop, lane,
    .do_type        = DISPATCH_QUEUE_RUNLOOP_TYPE,
    .do_dispose     = _dispatch_runloop_queue_dispose,
    .do_debug       = _dispatch_queue_debug,
    .do_invoke      = _dispatch_lane_invoke,

    .dq_activate    = _dispatch_queue_no_activate,
    .dq_wakeup      = _dispatch_runloop_queue_wakeup,
    .dq_push        = _dispatch_lane_push,
);
#endif

DISPATCH_VTABLE_INSTANCE(source,
    .do_type        = DISPATCH_SOURCE_KEVENT_TYPE,
    .do_dispose     = _dispatch_source_dispose,
    .do_debug       = _dispatch_source_debug,
    .do_invoke      = _dispatch_source_invoke,

    .dq_activate    = _dispatch_source_activate,
    .dq_wakeup      = _dispatch_source_wakeup,
    .dq_push        = _dispatch_lane_push,
);

DISPATCH_VTABLE_INSTANCE(channel,
    .do_type        = DISPATCH_CHANNEL_TYPE,
    .do_dispose     = _dispatch_channel_dispose,
    .do_debug       = _dispatch_channel_debug,
    .do_invoke      = _dispatch_channel_invoke,

    .dq_activate    = _dispatch_lane_activate,
    .dq_wakeup      = _dispatch_channel_wakeup,
    .dq_push        = _dispatch_lane_push,
);

#if HAVE_MACH
DISPATCH_VTABLE_INSTANCE(mach,
    .do_type        = DISPATCH_MACH_CHANNEL_TYPE,
    .do_dispose     = _dispatch_mach_dispose,
    .do_debug       = _dispatch_mach_debug,
    .do_invoke      = _dispatch_mach_invoke,

    .dq_activate    = _dispatch_mach_activate,
    .dq_wakeup      = _dispatch_mach_wakeup,
    .dq_push        = _dispatch_lane_push,
);

Here we can see the global concurrent queue (queue_global) uses _dispatch_root_queue_wakeup, while private serial and concurrent queues use _dispatch_lane_wakeup.
First, _dispatch_root_queue_wakeup:

void
_dispatch_root_queue_wakeup(dispatch_queue_global_t dq,
        DISPATCH_UNUSED dispatch_qos_t qos, dispatch_wakeup_flags_t flags)
{
    if (!(flags & DISPATCH_WAKEUP_BLOCK_WAIT)) {
        DISPATCH_INTERNAL_CRASH(dq->dq_priority,
                "Don't try to wake up or override a root queue");
    }
    if (flags & DISPATCH_WAKEUP_CONSUME_2) {
        return _dispatch_release_2_tailcall(dq);
    }
}

A private concurrent queue goes through _dispatch_lane_wakeup instead:

void
_dispatch_lane_wakeup(dispatch_lane_class_t dqu, dispatch_qos_t qos,
        dispatch_wakeup_flags_t flags)
{
    dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;

    if (unlikely(flags & DISPATCH_WAKEUP_BARRIER_COMPLETE)) {
        return _dispatch_lane_barrier_complete(dqu, qos, flags);
    }
    if (_dispatch_queue_class_probe(dqu)) {
        target = DISPATCH_QUEUE_WAKEUP_TARGET;
    }
    return _dispatch_queue_wakeup(dqu, qos, flags, target);
}

The global queue's wakeup does nothing but, at most, call _dispatch_release_2_tailcall: there is no barrier handling at all.
A private concurrent queue, on the other hand, sees DISPATCH_WAKEUP_BARRIER_COMPLETE and calls _dispatch_lane_barrier_complete, which is where the barrier is honoured:

static void
_dispatch_lane_barrier_complete(dispatch_lane_class_t dqu, dispatch_qos_t qos,
        dispatch_wakeup_flags_t flags)
{
    dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;
    dispatch_lane_t dq = dqu._dl;

    if (dq->dq_items_tail && !DISPATCH_QUEUE_IS_SUSPENDED(dq)) {
        struct dispatch_object_s *dc = _dispatch_queue_get_head(dq);
        if (likely(dq->dq_width == 1 || _dispatch_object_is_barrier(dc))) {
            if (_dispatch_object_is_waiter(dc)) {
                return _dispatch_lane_drain_barrier_waiter(dq, dc, flags, 0);
            }
        } else if (dq->dq_width > 1 && !_dispatch_object_is_barrier(dc)) {
            return _dispatch_lane_drain_non_barriers(dq, dc, flags);
        }

        if (!(flags & DISPATCH_WAKEUP_CONSUME_2)) {
            _dispatch_retain_2(dq);
            flags |= DISPATCH_WAKEUP_CONSUME_2;
        }
        target = DISPATCH_QUEUE_WAKEUP_TARGET;
    }

    uint64_t owned = DISPATCH_QUEUE_IN_BARRIER +
            dq->dq_width * DISPATCH_QUEUE_WIDTH_INTERVAL;
    return _dispatch_lane_class_barrier_complete(dq, qos, flags, target, owned);
}

The interesting branch is this part:

if (likely(dq->dq_width == 1 || _dispatch_object_is_barrier(dc))) {
            if (_dispatch_object_is_waiter(dc)) {
                return _dispatch_lane_drain_barrier_waiter(dq, dc, flags, 0);
            }
        } else if (dq->dq_width > 1 && !_dispatch_object_is_barrier(dc)) {
            return _dispatch_lane_drain_non_barriers(dq, dc, flags);
        }

In this excerpt: for a serial queue (dq_width == 1), or when the head item is itself a barrier, a waiting item is drained through _dispatch_lane_drain_barrier_waiter; for a concurrent queue whose head is not a barrier, _dispatch_lane_drain_non_barriers drains the non-barrier items first:

static void
_dispatch_lane_drain_non_barriers(dispatch_lane_t dq,
        struct dispatch_object_s *dc, dispatch_wakeup_flags_t flags)
{
    size_t owned_width = dq->dq_width;
    struct dispatch_object_s *next_dc;

    // see _dispatch_lane_drain, go in non barrier mode, and drain items

    os_atomic_and2o(dq, dq_state, ~DISPATCH_QUEUE_IN_BARRIER, release);

    do {
        if (likely(owned_width)) {
            owned_width--;
        } else if (_dispatch_object_is_waiter(dc)) {
            // sync "readers" don't observe the limit
            _dispatch_queue_reserve_sync_width(dq);
        } else if (!_dispatch_queue_try_acquire_async(dq)) {
            // no width left
            break;
        }
        next_dc = _dispatch_queue_pop_head(dq, dc);
        if (_dispatch_object_is_waiter(dc)) {
            _dispatch_non_barrier_waiter_redirect_or_wake(dq, dc);
        } else {
            _dispatch_continuation_redirect_push(dq, dc,
                    _dispatch_queue_max_qos(dq));
        }
drain_again:
        dc = next_dc;
    } while (dc && !_dispatch_object_is_barrier(dc));

    uint64_t old_state, new_state, owner_self = _dispatch_lock_value_for_self();
    uint64_t owned = owned_width * DISPATCH_QUEUE_WIDTH_INTERVAL;

    if (dc) {
        owned = _dispatch_queue_adjust_owned(dq, owned, dc);
    }

    os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, relaxed, {
        new_state  = old_state - owned;
        new_state &= ~DISPATCH_QUEUE_DRAIN_UNLOCK_MASK;
        new_state &= ~DISPATCH_QUEUE_DIRTY;

        // similar to _dispatch_lane_non_barrier_complete():
        // if by the time we get here all redirected non barrier syncs are
        // done and returned their width to the queue, we may be the last
        // chance for the next item to run/be re-driven.
        if (unlikely(dc)) {
            new_state |= DISPATCH_QUEUE_DIRTY;
            new_state = _dispatch_lane_non_barrier_complete_try_lock(dq,
                    old_state, new_state, owner_self);
        } else if (unlikely(_dq_state_is_dirty(old_state))) {
            os_atomic_rmw_loop_give_up({
                os_atomic_xor2o(dq, dq_state, DISPATCH_QUEUE_DIRTY, acquire);
                next_dc = os_atomic_load2o(dq, dq_items_head, relaxed);
                goto drain_again;
            });
        }
    });

    old_state -= owned;
    _dispatch_lane_non_barrier_complete_finish(dq, flags, old_state, new_state);
}

This loop pops and redirects every non-barrier item at the head of the queue, stopping when it reaches a barrier, then fixes up the queue state.
Finally, _dispatch_lane_class_barrier_complete:

static void
_dispatch_lane_class_barrier_complete(dispatch_lane_t dq, dispatch_qos_t qos,
        dispatch_wakeup_flags_t flags, dispatch_queue_wakeup_target_t target,
        uint64_t owned)
{
    uint64_t old_state, new_state, enqueue;
    dispatch_queue_t tq;

    if (target == DISPATCH_QUEUE_WAKEUP_MGR) {
        tq = _dispatch_mgr_q._as_dq;
        enqueue = DISPATCH_QUEUE_ENQUEUED_ON_MGR;
    } else if (target) {
        tq = (target == DISPATCH_QUEUE_WAKEUP_TARGET) ? dq->do_targetq : target;
        enqueue = DISPATCH_QUEUE_ENQUEUED;
    } else {
        tq = NULL;
        enqueue = 0;
    }

again:
    os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, release, {
        if (unlikely(_dq_state_needs_ensure_ownership(old_state))) {
            _dispatch_event_loop_ensure_ownership((dispatch_wlh_t)dq);
            _dispatch_queue_move_to_contended_sync(dq->_as_dq);
            os_atomic_rmw_loop_give_up(goto again);
        }
        new_state  = _dq_state_merge_qos(old_state - owned, qos);
        new_state &= ~DISPATCH_QUEUE_DRAIN_UNLOCK_MASK;
        if (unlikely(_dq_state_is_suspended(old_state))) {
            if (likely(_dq_state_is_base_wlh(old_state))) {
                new_state &= ~DISPATCH_QUEUE_ENQUEUED;
            }
        } else if (enqueue) {
            if (!_dq_state_is_enqueued(old_state)) {
                new_state |= enqueue;
            }
        } else if (unlikely(_dq_state_is_dirty(old_state))) {
            os_atomic_rmw_loop_give_up({
                // just renew the drain lock with an acquire barrier, to see
                // what the enqueuer that set DIRTY has done.
                // the xor generates better assembly as DISPATCH_QUEUE_DIRTY
                // is already in a register
                os_atomic_xor2o(dq, dq_state, DISPATCH_QUEUE_DIRTY, acquire);
                flags |= DISPATCH_WAKEUP_BARRIER_COMPLETE;
                return dx_wakeup(dq, qos, flags);
            });
        } else {
            new_state &= ~DISPATCH_QUEUE_MAX_QOS_MASK;
        }
    });
    old_state -= owned;
    dispatch_assert(_dq_state_drain_locked_by_self(old_state));
    dispatch_assert(!_dq_state_is_enqueued_on_manager(old_state));

    if (_dq_state_is_enqueued(new_state)) {
        _dispatch_trace_runtime_event(sync_async_handoff, dq, 0);
    }

#if DISPATCH_USE_KEVENT_WORKLOOP
    if (_dq_state_is_base_wlh(old_state)) {
        // - Only non-"du_is_direct" sources & mach channels can be enqueued
        //   on the manager.
        //
        // - Only dispatch_source_cancel_and_wait() and
        //   dispatch_source_set_*_handler() use the barrier complete codepath,
        //   none of which are used by mach channels.
        //
        // Hence no source-ish object can both be a workloop and need to use the
        // manager at the same time.
        dispatch_assert(!_dq_state_is_enqueued_on_manager(new_state));
        if (_dq_state_is_enqueued_on_target(old_state) ||
                _dq_state_is_enqueued_on_target(new_state) ||
                !_dq_state_in_uncontended_sync(old_state)) {
            return _dispatch_event_loop_end_ownership((dispatch_wlh_t)dq,
                    old_state, new_state, flags);
        }
        _dispatch_event_loop_assert_not_owned((dispatch_wlh_t)dq);
        if (flags & DISPATCH_WAKEUP_CONSUME_2) {
            return _dispatch_release_2_tailcall(dq);
        }
        return;
    }
#endif

    if (_dq_state_received_override(old_state)) {
        // Ensure that the root queue sees that this thread was overridden.
        _dispatch_set_basepri_override_qos(_dq_state_max_qos(old_state));
    }

    if (tq) {
        if (likely((old_state ^ new_state) & enqueue)) {
            dispatch_assert(_dq_state_is_enqueued(new_state));
            dispatch_assert(flags & DISPATCH_WAKEUP_CONSUME_2);
            return _dispatch_queue_push_queue(tq, dq, new_state);
        }
#if HAVE_PTHREAD_WORKQUEUE_QOS
        // <rdar://problem/27694093> when doing sync to async handoff
        // if the queue received an override we have to forecefully redrive
        // the same override so that a new stealer is enqueued because
        // the previous one may be gone already
        if (_dq_state_should_override(new_state)) {
            return _dispatch_queue_wakeup_with_override(dq, new_state, flags);
        }
#endif
    }
    if (flags & DISPATCH_WAKEUP_CONSUME_2) {
        return _dispatch_release_2_tailcall(dq);
    }
}

This clears the barrier's state from the queue so that the tasks queued behind the barrier can finally run.
The global concurrent queue's _dispatch_root_queue_wakeup contains none of this barrier handling, which is why a barrier has no effect there.
And deliberately so: the global concurrent queue is shared with the system, which may have its own work in flight on it. If a user barrier could stall it, system-level tasks would be blocked and overall performance would suffer.
The conclusions:
A barrier only blocks the queue it is submitted to.
A barrier has no effect on the global concurrent queue.

2 Semaphores

2.1 Using Semaphores

Semaphores are commonly used to synchronize threads and to cap the maximum number of concurrent tasks. The three functions:
dispatch_semaphore_create creates a semaphore with an initial value
dispatch_semaphore_signal increments (signals) the semaphore
dispatch_semaphore_wait decrements the semaphore, blocking while the value would drop below zero

    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_semaphore_t sem = dispatch_semaphore_create(1);
    dispatch_queue_t queue1 = dispatch_queue_create("robert", NULL);
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // wait
        sleep(2);
        NSLog(@"执行任务1");
        NSLog(@"任务1完成");
    });
    
    // task 2
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"执行任务2");
        NSLog(@"任务2完成");
        dispatch_semaphore_signal(sem); // signal
    });
    
    // task 3
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
        sleep(2);

        NSLog(@"执行任务3");
        NSLog(@"任务3完成");
        dispatch_semaphore_signal(sem);
    });


    // task 4
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
        sleep(2);

        NSLog(@"执行任务4");
        NSLog(@"任务4完成");
        dispatch_semaphore_signal(sem);
    });

The result:


[Figure 6]

Task 2 finishes first; only after its signal do tasks 1, 4, and 3 run.
If we pass 3 to dispatch_semaphore_create instead:

[Figure 7]

Now tasks 1, 3, and 4 run concurrently — the initial value of 3 allows three tasks in flight at once.
So:

  • a semaphore can cap concurrency
  • a semaphore can synchronize threads

How does it achieve this? Let's look at the underlying implementation.

2.2 How Semaphores Work

Start with this code:

    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_semaphore_t sem = dispatch_semaphore_create(0);
    dispatch_queue_t queue1 = dispatch_queue_create("robert", NULL);
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // wait
        NSLog(@"执行任务1");
        NSLog(@"任务1完成");
    });
    
    // task 2
    dispatch_async(queue, ^{
        sleep(2);
        NSLog(@"执行任务2");
        NSLog(@"任务2完成");
        dispatch_semaphore_signal(sem); // signal
    });

Task 2 sleeps for two seconds, so by rights it should finish after task 1. Run it:

[Figure 8]

Task 2 finishes first. Why?
dispatch_semaphore_wait provides the synchronization: the semaphore was created with 0, so task 1 blocks at the wait and cannot run the code after it. Only when task 2 calls dispatch_semaphore_signal does task 1 proceed — that is how the execution flow is controlled.
What do dispatch_semaphore_wait and dispatch_semaphore_signal do to produce this behaviour? To the source. Searching for dispatch_semaphore_signal finds:

intptr_t
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
    long value = os_atomic_inc2o(dsema, dsema_value, release);
    if (likely(value > 0)) {
        return 0;
    }
    if (unlikely(value == LONG_MIN)) {
        DISPATCH_CLIENT_CRASH(value,
                "Unbalanced call to dispatch_semaphore_signal()");
    }
    return _dispatch_semaphore_signal_slow(dsema);
}

The key line:

long value = os_atomic_inc2o(dsema, dsema_value, release);

os_atomic_inc2o atomically increments the semaphore's value. If the result is greater than 0, no thread is blocked in wait and signal returns 0 immediately. In our example the value started at 0 and a wait had already taken it to -1, so the increment only brings it back to 0 and the slow path runs to wake the waiter. But first, what exactly is this value? The header comment on dispatch_semaphore_create explains:


/*!
 * @function dispatch_semaphore_create
 *
 * @abstract
 * Creates new counting semaphore with an initial value.
 *
 * @discussion
 * Passing zero for the value is useful for when two threads need to reconcile
 * the completion of a particular event. Passing a value greater than zero is
 * useful for managing a finite pool of resources, where the pool size is equal
 * to the value.
 *
 * @param value
 * The starting value for the semaphore. Passing a value less than zero will
 * cause NULL to be returned.
 *
 * @result
 * The newly created semaphore, or NULL on failure.
 */
API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_MALLOC DISPATCH_RETURNS_RETAINED DISPATCH_WARN_RESULT
DISPATCH_NOTHROW
dispatch_semaphore_t
dispatch_semaphore_create(intptr_t value);

In short: passing 0 is for reconciling the completion of an event between two threads; a value greater than 0 manages a finite pool of resources whose size equals the value; and a value less than 0 makes dispatch_semaphore_create return NULL.

if (likely(value > 0)) {
        return 0;
    }

Back in dispatch_semaphore_signal, a post-increment value greater than 0 means no thread is waiting, so it returns 0 immediately.
Now dispatch_semaphore_wait:

intptr_t
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
    long value = os_atomic_dec2o(dsema, dsema_value, acquire);
    if (likely(value >= 0)) {
        return 0;
    }
    return _dispatch_semaphore_wait_slow(dsema, timeout);
}

Here os_atomic_dec2o atomically decrements the value by 1, followed by the check:

if (likely(value >= 0)) {
        return 0;
    }

A non-negative result means a resource was available, and wait returns 0 at once.
In our example dispatch_semaphore_create started the value at 0, so the decrement in dispatch_semaphore_wait takes it to -1 — below 0 — and execution goes to _dispatch_semaphore_wait_slow. Its code:

static intptr_t
_dispatch_semaphore_wait_slow(dispatch_semaphore_t dsema,
        dispatch_time_t timeout)
{
    long orig;

    _dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
    switch (timeout) {
    default:
        if (!_dispatch_sema4_timedwait(&dsema->dsema_sema, timeout)) {
            break;
        }
        // Fall through and try to undo what the fast path did to
        // dsema->dsema_value
    case DISPATCH_TIME_NOW:
        orig = dsema->dsema_value;
        while (orig < 0) {
            if (os_atomic_cmpxchgv2o(dsema, dsema_value, orig, orig + 1,
                    &orig, relaxed)) {
                return _DSEMA4_TIMEOUT();
            }
        }
        // Another thread called semaphore_signal().
        // Fall through and drain the wakeup.
    case DISPATCH_TIME_FOREVER:
        _dispatch_sema4_wait(&dsema->dsema_sema);
        break;
    }
    return 0;
}

The switch branches on timeout:
With a concrete deadline (the default case), _dispatch_sema4_timedwait waits with a timeout; on expiry it falls through to DISPATCH_TIME_NOW, which undoes the fast path's decrement and returns a timeout.
With DISPATCH_TIME_FOREVER, it calls _dispatch_sema4_wait and blocks:

void
_dispatch_sema4_wait(_dispatch_sema4_t *sema)
{
    int ret = 0;
    do {
        ret = sem_wait(sema);
    } while (ret == -1 && errno == EINTR);
    DISPATCH_SEMAPHORE_VERIFY_RET(ret);
}

sem_wait has no implementation here because it is a kernel-level primitive. The do/while around it retries whenever the wait is interrupted by a signal (ret == -1 && errno == EINTR); this blocking loop is the essence of the semaphore.
On the signal side, dispatch_semaphore_signal performs the ++ operation. If the result is still not greater than 0, at least one thread is blocked in dispatch_semaphore_wait. First comes the sanity check:

    if (unlikely(value == LONG_MIN)) {
        DISPATCH_CLIENT_CRASH(value,
                "Unbalanced call to dispatch_semaphore_signal()");
    }

LONG_MIN means the counter has wrapped — an unbalanced number of signal and wait calls — and the process crashes. Otherwise _dispatch_semaphore_signal_slow wakes one blocked waiter:

intptr_t
_dispatch_semaphore_signal_slow(dispatch_semaphore_t dsema)
{
    _dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
    _dispatch_sema4_signal(&dsema->dsema_sema, 1);
    return 1;
}

3 Dispatch Groups

3.1 Using Dispatch Groups

The dispatch group API:
dispatch_group_create creates a group
dispatch_group_notify runs a block once every task in the group has finished
dispatch_group_async submits a task that is tracked by the group
dispatch_group_wait waits, with a timeout, for the group's tasks to finish
dispatch_group_enter(group) enters the group
dispatch_group_leave(group) leaves the group; always paired with dispatch_group_enter
Let's start with this code:

    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_group_async(group, queue, ^{
        NSString *logoStr1 = @"https://t7.baidu.com/it/u=1956604245,3662848045&fm=193&f=GIF";
        NSData *data1 = [NSData dataWithContentsOfURL:[NSURL URLWithString:logoStr1]];
        UIImage *image1 = [UIImage imageWithData:data1];
        [self.mArray addObject:image1];
    });

    
    dispatch_async(queue, ^{
       NSString *logoStr2 = @"https://t7.baidu.com/it/u=1956604245,3662848045&fm=193&f=GIF";
        NSData *data2 = [NSData dataWithContentsOfURL:[NSURL URLWithString:logoStr2]];
        UIImage *image2 = [UIImage imageWithData:data2];
        [self.mArray addObject:image2];
    });
    
    
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
       UIImage *newImage = nil;
       NSLog(@"数组个数:%ld",self.mArray.count);
       for (int i = 0; i<self.mArray.count; i++) {
           UIImage *waterImage = self.mArray[i];
           newImage =[ro_ImageTool ro_WaterImageWithWaterImage:waterImage backImage:newImage waterImageRect:CGRectMake(20, 100*(i+1), 100, 40)];
       }
        self.imageView.image = newImage;
    });

The intent is to download two images and composite them once both are done. Note, though, that the second download is submitted with plain dispatch_async — it is not tracked by the group, so the notify block can fire before the second image arrives. The enter/leave pairing fixes that:

    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
   
    dispatch_group_async(group, queue, ^{
        NSString *logoStr1 = @"https://t7.baidu.com/it/u=1956604245,3662848045&fm=193&f=GIF";
        NSData *data1 = [NSData dataWithContentsOfURL:[NSURL URLWithString:logoStr1]];
        UIImage *image1 = [UIImage imageWithData:data1];
        [self.mArray addObject:image1];
    });
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
       
       NSString *logoStr2 = @"https://t7.baidu.com/it/u=1956604245,3662848045&fm=193&f=GIF";
        NSData *data2 = [NSData dataWithContentsOfURL:[NSURL URLWithString:logoStr2]];
        UIImage *image2 = [UIImage imageWithData:data2];
        [self.mArray addObject:image2];
        dispatch_group_leave(group);
        
    });


    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
       UIImage *newImage = nil;
       NSLog(@"数组个数:%ld",self.mArray.count);
       for (int i = 0; i<self.mArray.count; i++) {
           UIImage *waterImage = self.mArray[i];
           newImage =[ro_ImageTool ro_WaterImageWithWaterImage:waterImage backImage:newImage waterImageRect:CGRectMake(20, 100*(i+1), 100, 40)];
       }
        self.imageView.image = newImage;
    });

How does the dispatch group coordinate all of this? To find out, we need to dig into the underlying source.

3.2 How the dispatch group works

Let's start with the group-creation function. Searching the libdispatch source for dispatch_group_create, we find:

dispatch_group_t
dispatch_group_create(void)
{
    return _dispatch_group_create_with_count(0);
}

Next, let's look at _dispatch_group_create_with_count:

static inline dispatch_group_t
_dispatch_group_create_with_count(uint32_t n)
{
    dispatch_group_t dg = _dispatch_object_alloc(DISPATCH_VTABLE(group),
            sizeof(struct dispatch_group_s));
    dg->do_next = DISPATCH_OBJECT_LISTLESS;
    dg->do_targetq = _dispatch_get_default_queue(false);
    if (n) {
        os_atomic_store2o(dg, dg_bits,
                (uint32_t)-n * DISPATCH_GROUP_VALUE_INTERVAL, relaxed);
        os_atomic_store2o(dg, do_ref_cnt, 1, relaxed); // <rdar://22318411>
    }
    return dg;
}

This is very similar to how a semaphore is created: if a nonzero initial count n is passed in, dg_bits is preloaded with -n * DISPATCH_GROUP_VALUE_INTERVAL.
Now let's look at the two most important functions, dispatch_group_enter() and dispatch_group_leave().
(When you want to create a group and enter it in a single step, libdispatch also provides _dispatch_group_create_and_enter for that purpose.)
Searching for dispatch_group_enter, we find:

void
dispatch_group_enter(dispatch_group_t dg)
{
    // The value is decremented on a 32bits wide atomic so that the carry
    // for the 0 -> -1 transition is not propagated to the upper 32bits.
    uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits,
            DISPATCH_GROUP_VALUE_INTERVAL, acquire);
    uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
    if (unlikely(old_value == 0)) {
        _dispatch_retain(dg); // <rdar://problem/22318411>
    }
    if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {
        DISPATCH_CLIENT_CRASH(old_bits,
                "Too many nested calls to dispatch_group_enter()");
    }
}

Here os_atomic_sub_orig2o atomically subtracts DISPATCH_GROUP_VALUE_INTERVAL from dg_bits, a decrement analogous to dispatch_semaphore_wait, taking the group's value from 0 down to -1; the function returns the value held before the subtraction.

if (unlikely(old_value == 0)) {
        _dispatch_retain(dg); // <rdar://problem/22318411>
    }

If the old value was 0, this is the first outstanding enter on an idle group, so this branch retains the group to keep it alive until the matching leave.
Now let's look at dispatch_group_leave:

void
dispatch_group_leave(dispatch_group_t dg)
{
    // The value is incremented on a 64bits wide atomic so that the carry for
    // the -1 -> 0 transition increments the generation atomically.
    uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state,
            DISPATCH_GROUP_VALUE_INTERVAL, release);
    uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);

    if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {
        old_state += DISPATCH_GROUP_VALUE_INTERVAL;
        do {
            new_state = old_state;
            if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
                new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            } else {
                // If the group was entered again since the atomic_add above,
                // we can't clear the waiters bit anymore as we don't know for
                // which generation the waiters are for
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            }
            if (old_state == new_state) break;
        } while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,
                old_state, new_state, &old_state, relaxed)));
        return _dispatch_group_wake(dg, old_state, true);
    }

    if (unlikely(old_value == 0)) {
        DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
                "Unbalanced call to dispatch_group_leave()");
    }
}

Here os_atomic_add_orig2o performs the matching atomic increment, adding DISPATCH_GROUP_VALUE_INTERVAL (analogous to dispatch_semaphore_signal) and returning the state held before the addition. The masked old value then tells us where the count stood: if it equals DISPATCH_GROUP_VALUE_1, the count was -1, meaning this leave balances the last outstanding enter and brings the count back to 0, so the wake path is taken. If the old value is 0, leave was called without a matching enter, and the code crashes deliberately:

if (unlikely(old_value == 0)) {
        DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
                "Unbalanced call to dispatch_group_leave()");
    }

When the count returns to 0, dispatch_group_leave calls _dispatch_group_wake, which fires any blocks that were registered via dispatch_group_notify. Let's look at _dispatch_group_notify itself:

static inline void
_dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_continuation_t dsn)
{
    uint64_t old_state, new_state;
    dispatch_continuation_t prev;

    dsn->dc_data = dq;
    _dispatch_retain(dq);

    prev = os_mpsc_push_update_tail(os_mpsc(dg, dg_notify), dsn, do_next);
    if (os_mpsc_push_was_empty(prev)) _dispatch_retain(dg);
    os_mpsc_push_update_prev(os_mpsc(dg, dg_notify), prev, dsn, do_next);
    if (os_mpsc_push_was_empty(prev)) {
        os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, release, {
            new_state = old_state | DISPATCH_GROUP_HAS_NOTIFS;
            if ((uint32_t)old_state == 0) {
                os_atomic_rmw_loop_give_up({
                    return _dispatch_group_wake(dg, new_state, false);
                });
            }
        });
    }
}

Here, if old_state is already 0 when the notify block is registered (the group is idle), _dispatch_group_wake is called immediately, and the block is routed through the usual sync/async dispatch path and executed.
Conceptually, dispatch_group_async should be equivalent to wrapping the block in a dispatch_group_enter / dispatch_group_leave pair. Let's confirm that by finding its source:

void
dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_block_t db)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_GROUP_ASYNC;
    dispatch_qos_t qos;

    qos = _dispatch_continuation_init(dc, dq, db, 0, dc_flags);
    _dispatch_continuation_group_async(dg, dq, dc, qos);
}

dispatch_continuation_t dc = _dispatch_continuation_alloc(); here wraps the task into a continuation, and note that the DC_FLAG_GROUP_ASYNC flag is set. Next, let's look at _dispatch_continuation_group_async:

static inline void
_dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_continuation_t dc, dispatch_qos_t qos)
{
    dispatch_group_enter(dg);
    dc->dc_data = dg;
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

Here dispatch_group_enter(dg) is called before the task is pushed onto the queue.
Now let's look at _dispatch_continuation_async:

static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
        dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
    if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
        _dispatch_trace_item_push(dqu, dc);
    }
#else
    (void)dc_flags;
#endif
    return dx_push(dqu._dq, dc, qos);
}

There is still no dispatch_group_leave here, so where is it wrapped? By our reasoning, it must run after the callout finishes; let's verify that.
dx_push invokes the queue's dq_push, which eventually reaches _dispatch_client_callout to execute the callback.

Let's look at _dispatch_continuation_invoke_inline:

static inline void
_dispatch_continuation_invoke_inline(dispatch_object_t dou,
        dispatch_invoke_flags_t flags, dispatch_queue_class_t dqu)
{
    dispatch_continuation_t dc = dou._dc, dc1;
    dispatch_invoke_with_autoreleasepool(flags, {
        uintptr_t dc_flags = dc->dc_flags;
        // Add the item back to the cache before calling the function. This
        // allows the 'hot' continuation to be used for a quick callback.
        //
        // The ccache version is per-thread.
        // Therefore, the object has not been reused yet.
        // This generates better assembly.
        _dispatch_continuation_voucher_adopt(dc, dc_flags);
        if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
            _dispatch_trace_item_pop(dqu, dou);
        }
        if (dc_flags & DC_FLAG_CONSUME) {
            dc1 = _dispatch_continuation_free_cacheonly(dc);
        } else {
            dc1 = NULL;
        }
        if (unlikely(dc_flags & DC_FLAG_GROUP_ASYNC)) {
            _dispatch_continuation_with_group_invoke(dc);
        } else {
            _dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
            _dispatch_trace_item_complete(dc);
        }
        if (unlikely(dc1)) {
            _dispatch_continuation_free_to_cache_limit(dc1);
        }
    });
    _dispatch_perfmon_workitem_inc();
}

Here, if the DC_FLAG_GROUP_ASYNC flag is set, meaning the continuation came from dispatch_group_async, _dispatch_continuation_with_group_invoke is called instead of the plain callout. Its source:

static inline void
_dispatch_continuation_with_group_invoke(dispatch_continuation_t dc)
{
    struct dispatch_object_s *dou = dc->dc_data;
    unsigned long type = dx_type(dou);
    if (type == DISPATCH_GROUP_TYPE) {
        _dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
        _dispatch_trace_item_complete(dc);
        dispatch_group_leave((dispatch_group_t)dou);
    } else {
        DISPATCH_INTERNAL_CRASH(dx_type(dou), "Unexpected object type");
    }
}

And there it is: once _dispatch_client_callout has run the block, dispatch_group_leave is called to exit the group.

4 dispatch_source

  • Its CPU load is very small, and it tries not to occupy resources.
    After dispatch_source_merge_data is called on any thread, the handler previously set on the DispatchSource is executed (you can think of the handler as a block).
    This mechanism is called a custom event, one of the event types a dispatch source can handle.
    A handle is a pointer to a pointer; what it points to is a class or structure closely tied to the system, such as HINSTANCE (instance handle), HBITMAP (bitmap handle), HDC (device context handle), HICON (icon handle), and so on. Among them there is also a generic handle type, HANDLE.

  • dispatch_source_create creates the source

  • dispatch_source_set_event_handler sets the source's event handler

  • dispatch_source_merge_data merges data into the source event

  • dispatch_source_get_data reads the source's event data

  • dispatch_resume resumes the source

  • dispatch_suspend suspends the source
    In practice, dispatch_source is used mostly to build timers; other uses are comparatively rare.
    dispatch_source is not affected by the run loop, and it replaces asynchronous callback functions for handling system-related events.

Conclusion

This brings the GCD series to a close. Through iOS-GCD原理分析(一), iOS-GCD原理分析(二), and this article, we have covered the core GCD APIs and their pitfalls. I hope these three articles prove useful; if you find mistakes or omissions, corrections and feedback are very welcome so we can improve together.

Appendix: code

dispatch_source timer wrapper
It still has plenty of shortcomings; I will keep optimizing it, rounding out its features, and packaging it as a pod. Your support is appreciated.
