  • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (paper notes)

    The main architecture is still the Transformer. Input: a. WordPiece embeddings; b. learned positional embeddings, up...
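    A minimal sketch of how these two input pieces can be combined; the class name, vocabulary size, and dimensions below are illustrative assumptions, not the paper's code:

    import torch
    import torch.nn as nn

    class BertInputEmbeddings(nn.Module):
        """Sum WordPiece token embeddings and learned positional embeddings."""
        def __init__(self, vocab_size=30522, max_len=512, hidden=768):
            super().__init__()
            self.token_emb = nn.Embedding(vocab_size, hidden)  # WordPiece ids -> vectors
            self.pos_emb = nn.Embedding(max_len, hidden)       # learned, not sinusoidal
            self.norm = nn.LayerNorm(hidden)

        def forward(self, token_ids):  # token_ids: (batch, seq_len)
            positions = torch.arange(token_ids.size(1), device=token_ids.device)
            return self.norm(self.token_emb(token_ids) + self.pos_emb(positions))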

  • Linguistically-Informed Self-Attention for Semantic Role Labeling (paper notes)

    Jointly predicts parts of speech and predicates. Parts of speech: POS tagging; predicates: predicate labeling, which is Sema...
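    One way to read "jointly predict" is a shared encoder output feeding two task-specific classification heads; the names and sizes below are illustrative assumptions, not the paper's code:

    import torch.nn as nn

    class JointPosPredicateHeads(nn.Module):
        """Two heads over the same shared token representations."""
        def __init__(self, hidden=256, num_pos_tags=45, num_predicate_labels=2):
            super().__init__()
            self.pos_head = nn.Linear(hidden, num_pos_tags)                # part-of-speech tagging
            self.predicate_head = nn.Linear(hidden, num_predicate_labels)  # predicate labeling

        def forward(self, token_states):  # token_states: (batch, seq_len, hidden)
            return self.pos_head(token_states), self.predicate_head(token_states)

    Joint training would sum the two per-task losses so both tasks learn from the same encoder.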

  • Character-Level Language Modeling with Deeper Self-Attention (paper notes)

    1. Self-attention: uses the Transformer architecture. 2. Deep: 64 Transformer layers. 3. Adds Auxilia...
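    If point 3 refers to auxiliary losses at intermediate layers (the preview is cut off), a toy version could look like the sketch below; the function and argument names are assumptions, and the real training schedule is more involved:

    import torch.nn.functional as F

    def loss_with_intermediate_aux(embedded_chars, layers, head, targets):
        """Accumulate a next-character loss after every layer, not only the last one."""
        x = embedded_chars                 # (batch, seq_len, hidden), characters already embedded
        total = 0.0
        for layer in layers:               # e.g. a stack of 64 Transformer layers
            x = layer(x)
            logits = head(x)               # shared linear head over the character vocabulary
            total = total + F.cross_entropy(
                logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        return total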

  • Win10 and Ubuntu 16 Dual Boot, and CUDA Installation

    Win10 and Ubuntu 16 dual boot: 1. Install Win10 normally. 2. Open This PC -> Manage -> Disk Management, then shrink or delete a volume to reserve disk space for Ubuntu. 3. Use UltraISO to create a U...
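    The preview cuts off before the CUDA part, but once the toolkit and driver are installed, a quick sanity check can be scripted; a minimal sketch, not one of the article's steps, assuming nvcc and nvidia-smi are on the PATH:

    import subprocess

    # nvcc ships with the CUDA toolkit, nvidia-smi with the NVIDIA driver.
    for cmd in (["nvcc", "--version"], ["nvidia-smi"]):
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, check=True)
            print(out.stdout)
        except (FileNotFoundError, subprocess.CalledProcessError) as err:
            print(f"{cmd[0]} check failed: {err}")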