channel_w_reshape = tf.concat([channel_avg_reshape, channel_max_reshape], axis=1)
channel_attention = tf.reduce_sum(fc_2, axis=1, name="channel_attention_sum")
You've misunderstood this part: the two pooled descriptors are not concatenated; each one is passed through the same MLP separately, with shared weights.
Also, summing fc_2 along axis=1 collapses the channel dimension, so you end up with a single scalar per example instead of a per-channel attention vector. The two MLP outputs should be added elementwise and then passed through a sigmoid, as in the sketch below.
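To make the point concrete, here is a minimal sketch of the intended structure. The function name `channel_attention`, the `reduction_ratio` parameter, and the NHWC layout are my own assumptions rather than names from your code; the key point is that `fc_1`/`fc_2` are shared between the two pooled descriptors and their outputs are added elementwise, not reduce-summed.

```python
import tensorflow as tf

def channel_attention(inputs, reduction_ratio=8):
    """Sketch of CBAM channel attention: the avg- and max-pooled descriptors
    go through the SAME two-layer MLP, are added elementwise, then squashed
    with a sigmoid to give one weight per channel."""
    channels = inputs.shape[-1]

    # Shared MLP: the same two Dense layers are reused for both descriptors.
    fc_1 = tf.keras.layers.Dense(channels // reduction_ratio, activation="relu")
    fc_2 = tf.keras.layers.Dense(channels)

    # Global average and max pooling over the spatial dims -> shape (N, C).
    avg_pool = tf.reduce_mean(inputs, axis=[1, 2])
    max_pool = tf.reduce_max(inputs, axis=[1, 2])

    # Apply the shared MLP to each descriptor separately (no concat).
    avg_out = fc_2(fc_1(avg_pool))
    max_out = fc_2(fc_1(max_pool))

    # Elementwise sum (NOT reduce_sum over the channel axis), then sigmoid.
    scale = tf.sigmoid(avg_out + max_out)            # shape (N, C)
    scale = tf.reshape(scale, [-1, 1, 1, channels])  # broadcast over H, W

    return inputs * scale
```

This keeps one attention weight per channel, which is then broadcast over the spatial dimensions and multiplied into the feature map.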
Suggested reference: https://www.notion.so/Tensorflow-Cookbook-6f4563d0cd7343cb9d1e60cd1698b54d
Also see the paper walkthrough "[Paper explanation] Attention mechanisms for convolutional neural networks: CBAM: Convolutional Block Attention Module" (the CBAM paper was published at ECCV 2018).