A POS tagging example implemented with PyTorch + LSTM

 Updated: January 14, 2020 10:33:10   Author: say_c_box
Today the editor shares a POS tagging example implemented with PyTorch and an LSTM. It makes a good reference, and I hope it helps everyone. Come and take a look with the editor.

After a few days of study I finally have a rough idea of how to use PyTorch.

The code below is lifted directly from the official documentation.

Next I will try implementing other NLP tasks on my own.

# Author: Robert Guthrie

import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)


lstm = nn.LSTM(3, 3) # Input dim is 3, output dim is 3
inputs = [autograd.Variable(torch.randn((1, 3)))
          for _ in range(5)] # make a sequence of length 5

# initialize the hidden state.
hidden = (autograd.Variable(torch.randn(1, 1, 3)),
          autograd.Variable(torch.randn((1, 1, 3))))
for i in inputs:
  # Step through the sequence one element at a time.
  # after each step, hidden contains the hidden state.
  out, hidden = lstm(i.view(1, 1, -1), hidden)

# alternatively, we can do the entire sequence all at once.
# the first value returned by LSTM is all of the hidden states throughout
# the sequence. the second is just the most recent hidden state
# (compare the last slice of "out" with "hidden" below, they are the same)
# The reason for this is that:
# "out" will give you access to all hidden states in the sequence
# "hidden" will allow you to continue the sequence and backpropagate,
# by passing it as an argument to the lstm at a later time
# Add the extra 2nd dimension
inputs = torch.cat(inputs).view(len(inputs), 1, -1)
hidden = (autograd.Variable(torch.randn(1, 1, 3)),
          autograd.Variable(torch.randn((1, 1, 3)))) # clean out hidden state
out, hidden = lstm(inputs, hidden)
#print(out)
#print(hidden)
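
As the comments above claim, the last slice of out should match hidden[0] (h_n). A quick sanity check, added here for illustration:

# (Added) verify that the final time step of `out` matches h_n in `hidden`.
print(out[-1])    # output at the last time step, shape (1, 3)
print(hidden[0])  # h_n, shape (1, 1, 3) -- holds the same values as out[-1]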

# prepare the data
def prepare_sequence(seq, to_ix):
  idxs = [to_ix[w] for w in seq]
  tensor = torch.LongTensor(idxs)
  return autograd.Variable(tensor)
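
As a quick illustration of what prepare_sequence returns (the toy word-to-index mapping below is invented for demonstration only):

# (Added) toy example: two made-up words mapped to indices 0 and 1.
demo = prepare_sequence(["hello", "world"], {"hello": 0, "world": 1})
print(demo) # a Variable wrapping LongTensor([0, 1])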

training_data = [
  ("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
  ("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
word_to_ix = {}
for sent, tags in training_data:
  for word in sent:
    if word not in word_to_ix:
      word_to_ix[word] = len(word_to_ix)
print(word_to_ix)
tag_to_ix = {"DET": 0, "NN": 1, "V": 2}

# These will usually be more like 32 or 64 dimensional.
# We will keep them small, so we can see how the weights change as we train.
EMBEDDING_DIM = 6
HIDDEN_DIM = 6

# inherits from nn.Module
class LSTMTagger(nn.Module):

  def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
    super(LSTMTagger, self).__init__()
    self.hidden_dim = hidden_dim

    # an embedding matrix mapping vocab_size words to embedding_dim vectors
    self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)

    # pass in the two dimension arguments (input size, hidden size)
    # The LSTM takes word embeddings as inputs, and outputs hidden states
    # with dimensionality hidden_dim.
    self.lstm = nn.LSTM(embedding_dim, hidden_dim)

    # The linear layer that maps from hidden state space to tag space
    self.hidden2tag = nn.Linear(hidden_dim, tagset_size)
    self.hidden = self.init_hidden()

  def init_hidden(self):
    # Before we've done anything, we dont have any hidden state.
    # Refer to the Pytorch documentation to see exactly
    # why they have this dimensionality.
    # The axes semantics are (num_layers, minibatch_size, hidden_dim)
    return (autograd.Variable(torch.zeros(1, 1, self.hidden_dim)),
        autograd.Variable(torch.zeros(1, 1, self.hidden_dim)))

  def forward(self, sentence):
    embeds = self.word_embeddings(sentence)
    lstm_out, self.hidden = self.lstm(embeds.view(len(sentence), 1, -1), self.hidden)
    tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
    tag_scores = F.log_softmax(tag_space, dim=1) # dim is required in recent PyTorch
    return tag_scores

# embedding dim, hidden dim, vocab size, tagset size
model = LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix), len(tag_to_ix))

# torch.optim provides the various optimization algorithms
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# See what the scores are before training
# Note that element i,j of the output is the score for tag j for word i.
inputs = prepare_sequence(training_data[0][0], word_to_ix)
tag_scores = model(inputs)
print(tag_scores)
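
Because the model ends in log_softmax, each row of tag_scores is a log-probability distribution over the three tags. A quick added check: exponentiating a row should sum to roughly 1.

# (Added) each row should exponentiate to a distribution summing to ~1.
print(tag_scores.exp().sum(1))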

for epoch in range(300): # again, normally you would NOT do 300 epochs, it is toy data
  for sentence, tags in training_data:
    # Step 1. Remember that Pytorch accumulates gradients.
    # We need to clear them out before each instance
    model.zero_grad()

    # Also, we need to clear out the hidden state of the LSTM,
    # detaching it from its history on the last instance.
    model.hidden = model.init_hidden()

    # Step 2. Get our inputs ready for the network, that is, turn them into
    # Variables of word indices.
    sentence_in = prepare_sequence(sentence, word_to_ix)
    targets = prepare_sequence(tags, tag_to_ix)

    # Step 3. Run our forward pass.
    tag_scores = model(sentence_in)

    # Step 4. Compute the loss, gradients, and update the parameters by
    # calling optimizer.step()
    loss = loss_function(tag_scores, targets)
    loss.backward()
    optimizer.step()
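
To see where training ended up, you can report the final loss (a small addition; loss.item() assumes PyTorch 0.4 or later, while older versions would use loss.data[0]):

# (Added) report the loss from the last training step.
print("final loss:", loss.item())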

# See what the scores are after training
inputs = prepare_sequence(training_data[0][0], word_to_ix)
tag_scores = model(inputs)
# The sentence is "the dog ate the apple". i,j corresponds to score for tag j
# for word i. The predicted tag is the maximum scoring tag.
# Here, we can see the predicted sequence below is 0 1 2 0 1
# since 0 is index of the maximum value of row 1,
# 1 is the index of maximum value of row 2, etc.
# Which is DET NN V DET NN, the correct sequence!
print(tag_scores)
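
To turn the scores into actual tags, take the argmax of each row and map it back through an inverted tag dictionary (ix_to_tag below is an added helper, not part of the original):

# (Added) map each row's argmax back to its tag string.
ix_to_tag = {ix: tag for tag, ix in tag_to_ix.items()}
_, predicted = torch.max(tag_scores, 1)
print([ix_to_tag[ix] for ix in predicted.data.tolist()])
# expected: ['DET', 'NN', 'V', 'DET', 'NN']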


This PyTorch + LSTM POS tagging example is everything the editor has to share with you. I hope it gives you a useful reference, and I hope you will keep supporting 腳本之家.
