Natural Language Processing (NLP)

Siri's workflow: 1. Listen  2. Understand  3. Think  4. Compose a response  5. Answer

  1. Speech recognition
  2. Natural language processing - semantic analysis
  3. Business logic - combine the scenario and context
  4. Natural language processing - generate natural-language text from the analysis results
  5. Speech synthesis
Natural Language Processing

A common NLP processing pipeline:

First tokenize the training text (with stemming or lemmatization), count term frequencies, and use the term frequency-inverse document frequency (TF-IDF) algorithm to measure each word's contribution to the overall semantics of a sample. Based on each word's semantic contribution, build a supervised classification model. Feed test samples to the model to obtain their semantic category.
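The pipeline above can be sketched end to end. This is a minimal illustration, not the notes' own code: it assumes scikit-learn's TfidfVectorizer (which tokenizes and computes TF-IDF weights in one step) and a naive Bayes classifier, with a made-up corpus, labels, and test sentence.

```python
# A minimal sketch of the TF-IDF + supervised-classifier pipeline.
# Corpus, labels, and test sentence are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

train_docs = [
    "the match ended with a late goal",
    "the striker scored twice in the final",
    "the new phone ships with a faster chip",
    "the laptop update improves battery life",
]
train_labels = ["sports", "sports", "tech", "tech"]

# Tokenize, count term frequencies, and weight by TF-IDF in one step
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_docs)

# Fit a supervised classifier on the TF-IDF features
model = MultinomialNB()
model.fit(X_train, train_labels)

# Classify an unseen sample: its category follows from which training
# words it shares ("match", "in" appear only in the sports documents)
X_test = vectorizer.transform(["the keeper saved a penalty in the match"])
print(model.predict(X_test)[0])
```

Words the vectorizer never saw during training ("keeper", "saved", "penalty") are simply dropped at transform time; the prediction rests entirely on the overlap with the training vocabulary.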

NLP toolkit - nltk

Text tokenization
import nltk.tokenize as tk
# Split a passage of text into sentences
sent_list = tk.sent_tokenize(text)
# Split a sentence into words
word_list = tk.word_tokenize(sent)
# Split text into tokens with the word-punctuation tokenizer
punctTokenizer = tk.WordPunctTokenizer()
word_list = punctTokenizer.tokenize(text)
"""
demo02_tokenize.py  分词器
"""
import nltk.tokenize as tk
import nltk
doc = "Are you curious about tokenization? \
	Let's see how it works! \
	We neek to analyze a couple of sentences \
	with punctuations to see it in action."
# print(doc)

# Download the punkt sentence-tokenizer models (needed once)
nltk.download('punkt')
sent_list = tk.sent_tokenize(doc)
for i, sent in enumerate(sent_list):
	print('%2d' % (i+1), sent) 

word_list = tk.word_tokenize(doc)
for i, word in enumerate(word_list):
	print('%2d' % (i+1), word) 

tokenizer = tk.WordPunctTokenizer()
word_list = tokenizer.tokenize(doc)
for i, word in enumerate(word_list):
	print('%2d' % (i+1), word) 

Output of the three tokenizers:

1 Are you curious about tokenization?
 2 Let's see how it works!
 3 We need to analyze a couple of sentences     with punctuations to see it in action.
 1 Are
 2 you
 3 curious
 4 about
 5 tokenization
 6 ?
 7 Let
 8 's
 9 see
10 how
11 it
12 works
13 !
14 We
15 need
16 to
17 analyze
18 a
19 couple
20 of
21 sentences
22 with
23 punctuations
24 to
25 see
26 it
27 in
28 action
29 .
 1 Are
 2 you
 3 curious
 4 about
 5 tokenization
 6 ?
 7 Let
 8 '
 9 s
10 see
11 how
12 it
13 works
14 !
15 We
16 need
17 to
18 analyze
19 a
20 couple
21 of
22 sentences
23 with
24 punctuations
25 to
26 see
27 it
28 in
29 action
30 .
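Beyond tokenization, nltk also covers the stemming step mentioned in the pipeline above. A minimal sketch using nltk's Porter stemmer (which, unlike the tokenizers, needs no downloaded data):

```python
# Reduce inflected words to their stems with nltk's Porter stemmer.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ['running', 'sentences', 'tokenization']:
    # e.g. 'running' -> 'run'; stems need not be dictionary words
    print(word, '->', stemmer.stem(word))
```

Note that a stem is a truncated form, not necessarily a real word; for dictionary-form output (lemmatization, the "原型提取" of the pipeline), nltk's WordNetLemmatizer can be used instead, after downloading the wordnet corpus.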