Paper Reading: REALISE Model

REALISE model:

1. Utilizes multiple encoders to obtain the semantic, phonetic, and graphic information needed to distinguish similar Chinese characters and correct spelling errors.
2. Then develops a selective modality fusion module to obtain context-aware multimodal representations.
3. Finally, the output layer predicts the probabilities of the corrected characters.
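
A minimal sketch of this overall pipeline, assuming the three encoders and the fusion module are defined as in the sections below (all module and parameter names here are illustrative, not from the paper's released code):

import torch.nn as nn

class REALISESketch(nn.Module):
    # Illustrative skeleton only; the real model uses BERT, a hierarchical
    # GRU/Transformer phonetic encoder, and a ResNet graphic encoder.
    def __init__(self, semantic, phonetic, graphic, fusion, vocab_size, hidden=768):
        super().__init__()
        self.semantic, self.phonetic, self.graphic = semantic, phonetic, graphic
        self.fusion = fusion                             # selective modality fusion
        self.classifier = nn.Linear(hidden, vocab_size)  # output layer

    def forward(self, tokens, pinyins, images):
        h_t = self.semantic(tokens)     # textual representation
        h_a = self.phonetic(pinyins)    # acoustic representation
        h_v = self.graphic(images)      # visual representation
        h = self.fusion(h_t, h_a, h_v)  # context-aware multimodal representation
        return self.classifier(h)       # per-position correction logits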

Encoders:

Semantic encoder:

BERT, which provides rich contextual word representations thanks to unsupervised pretraining on large corpora.

# Load the Chinese BERT tokenizer used by the semantic encoder
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')

A tokenizer is a text-processing tool that splits text into individual words (called tokens) or other units such as punctuation marks and numbers. In natural language processing, a tokenizer is typically used to break a sentence into words or subword units for text analysis and machine-learning tasks. Common tokenizers are either rule-based or learned; learned tokenizers can automatically identify word and phrase boundaries and split them into individual tokens.
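
For example, a quick check of how this tokenizer splits a Chinese sentence (the example sentence is mine, not from the paper):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
tokens = tokenizer.tokenize('我喜欢自然语言处理')
# bert-base-chinese tokenizes Chinese at the character level:
# ['我', '喜', '欢', '自', '然', '语', '言', '处', '理']
ids = tokenizer.convert_tokens_to_ids(tokens)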

Phonetic encoder

Pinyin: initial (21) + final (39) + tone (5); see the decomposition sketch below.
Hierarchical phonetic encoder: a character-level encoder and a sentence-level encoder.
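
A minimal sketch of decomposing characters into initial, final, and tone with the pypinyin library (pypinyin is an assumption here; the paper does not specify which pinyin toolkit it uses):

from pypinyin import pinyin, Style

# Decompose each character's pronunciation into its parts
initials = pinyin('汉字', style=Style.INITIALS, strict=False)  # e.g. [['h'], ['z']]
finals = pinyin('汉字', style=Style.FINALS, strict=False)      # e.g. [['an'], ['i']]
tones = pinyin('汉字', style=Style.TONE3)                      # e.g. [['han4'], ['zi4']]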

Character-level encoder

GRU:
GRU (Gated Recurrent Unit) is a type of recurrent neural network (RNN). Like LSTM (Long Short-Term Memory), it was proposed to address long-term dependencies and the gradient problems of backpropagation through time.

GRU and LSTM perform about the same in many cases, so why use the newer GRU (proposed in 2014) rather than the more battle-tested LSTM (proposed in 1997)? The paper chooses GRU because it achieves results similar to LSTM while being cheaper to compute.
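
A minimal sketch of the character-level encoder, assuming each character's pinyin is a short sequence of initial/final/tone symbols and the last GRU hidden state is taken as the character's phonetic vector (symbol inventory and sizes are illustrative):

import torch.nn as nn

class CharPhoneticEncoder(nn.Module):
    # Encodes one character's pinyin symbol sequence into a fixed vector.
    def __init__(self, n_symbols=65, emb_dim=64, hidden=768):
        # 21 initials + 39 finals + 5 tones = 65 phonetic symbols
        super().__init__()
        self.embed = nn.Embedding(n_symbols, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)

    def forward(self, symbol_ids):           # (batch, seq_len) of symbol indices
        _, h_n = self.gru(self.embed(symbol_ids))
        return h_n.squeeze(0)                # (batch, hidden) phonetic vectors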

Sentence-level encoder: obtains the contextualized phonetic representation for each Chinese character.

A 4-layer Transformer with the same hidden size as the semantic encoder.
Because the independent phonetic vectors carry no order information, a positional embedding is added to each vector; the vectors are then packed together and passed through the Transformer layers to compute the contextualized representation in the acoustic modality.
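
A minimal sketch of this sentence-level encoder, with learned positional embeddings added to the per-character phonetic vectors before a 4-layer Transformer (head count and maximum length are assumptions matching BERT-base):

import torch
import torch.nn as nn

class SentencePhoneticEncoder(nn.Module):
    def __init__(self, hidden=768, n_layers=4, n_heads=12, max_len=512):
        super().__init__()
        self.pos_embed = nn.Embedding(max_len, hidden)  # restores order information
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, char_vecs):             # (batch, n_chars, hidden)
        positions = torch.arange(char_vecs.size(1), device=char_vecs.device)
        x = char_vecs + self.pos_embed(positions)  # add positional embedding
        return self.encoder(x)                     # contextualized acoustic repr.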

Graphic Encoder

ResNet
Three fonts correspond to the three channels of the character images, whose size is set to 32×32 pixels.
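
A minimal sketch of a graphic encoder over 32×32, 3-channel (one font per channel) glyph images; the compact residual network below illustrates the idea and is not the paper's exact ResNet configuration:

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(ch), nn.BatchNorm2d(ch)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)             # residual connection

class GraphicEncoder(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, 3, padding=1)   # 3 channels = 3 fonts
        self.blocks = nn.Sequential(ResBlock(64), ResBlock(64))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Linear(64, hidden)

    def forward(self, images):                 # (batch, 3, 32, 32) glyph images
        x = self.blocks(torch.relu(self.stem(images)))
        return self.proj(self.pool(x).flatten(1))   # (batch, hidden) visual vectors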

Selective Modality Fusion Module

Ht, Ha, Hv = textual, acoustic, and visual representations.
Fuses the information from the different modalities.
Selective gate unit: selects how much information from each modality flows into the mixed multimodal representation.
Gate values: computed by a fully-connected layer followed by a sigmoid function.
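
A minimal sketch of the selective fusion, assuming one gate per modality computed from the concatenated representations (the paper's exact gate inputs may differ; this follows the description above):

import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        # One gate per modality: fully-connected layer + sigmoid
        self.gate_t = nn.Linear(3 * hidden, hidden)
        self.gate_a = nn.Linear(3 * hidden, hidden)
        self.gate_v = nn.Linear(3 * hidden, hidden)

    def forward(self, h_t, h_a, h_v):          # each (batch, seq, hidden)
        cat = torch.cat([h_t, h_a, h_v], dim=-1)
        g_t = torch.sigmoid(self.gate_t(cat))  # how much text to let through
        g_a = torch.sigmoid(self.gate_a(cat))  # how much acoustics to let through
        g_v = torch.sigmoid(self.gate_v(cat))  # how much visuals to let through
        return g_t * h_t + g_a * h_a + g_v * h_v   # mixed multimodal representation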

Acoustic and Visual Pretraining

Aims to learn the acoustic-textual and visual-textual relationships.
Phonetic encoder: input-method pretraining objective (predict the characters from their pinyin, like an input method engine).
Graphic encoder: OCR pretraining objective (recognize a character from its glyph image).
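
A minimal sketch of the two objectives as character-classification losses (an illustration of the idea; the heads, vocabulary size, and exact losses are assumptions, not the paper's code):

import torch.nn as nn

vocab_size = 21128                        # bert-base-chinese vocabulary size
criterion = nn.CrossEntropyLoss()

# Input-method objective: predict each character from its pinyin representation.
# phonetic_repr: (batch, seq, hidden) from the phonetic encoder
im_head = nn.Linear(768, vocab_size)
def input_method_loss(phonetic_repr, char_ids):
    logits = im_head(phonetic_repr)                   # (batch, seq, vocab)
    return criterion(logits.view(-1, vocab_size), char_ids.view(-1))

# OCR objective: recognize which character a glyph image depicts.
# visual_repr: (batch, hidden) from the graphic encoder
ocr_head = nn.Linear(768, vocab_size)
def ocr_loss(visual_repr, char_ids):
    return criterion(ocr_head(visual_repr), char_ids)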

Data and Metrics

Data: SIGHAN, converted to Simplified Chinese using the OpenCC tool (example below).
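
A quick example of that conversion with the OpenCC Python package (the 't2s' configuration converts Traditional to Simplified; the specific package is an assumption, since the post only names OpenCC):

import opencc

converter = opencc.OpenCC('t2s')           # Traditional -> Simplified
print(converter.convert('漢字拼寫檢查'))    # -> '汉字拼写检查'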

Two evaluation levels are used to test the model: the detection level (are all erroneous positions found?) and the correction level (are all errors also corrected?).
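
A minimal sketch of sentence-level scoring under the two criteria (a common convention for SIGHAN; the official scoring scripts may differ in detail):

def sentence_correct(src, pred, gold, level='correction'):
    # detection: the model flags exactly the positions where src and gold differ
    # correction: the model also outputs the right characters everywhere
    errors = [i for i, (s, g) in enumerate(zip(src, gold)) if s != g]
    flagged = [i for i, (s, p) in enumerate(zip(src, pred)) if s != p]
    if level == 'detection':
        return flagged == errors
    return pred == gold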

Source: https://blog.csdn.net/qq_48566899/article/details/132560529
