Question: Exercise 3
Instructions: Implement the NER model with the architecture discussed in the lectures. All the necessary layers are objects from the tensorflow.keras.layers library, but they are already loaded in memory, so you do not have to worry about import calls.
Please use the help function, e.g. help(tf.keras.layers.Dense), for more information on a layer.
tf.keras.Sequential: A combinator that applies layers serially (by function composition). This is not properly a layer (it lives under tensorflow.keras only, not under tensorflow.keras.layers); it is in fact a TensorFlow model object.
You can add layers to a Sequential model by calling the method add(layer).
You may skip the input shape and pass it in the first layer you instantiate, if necessary (RNNs usually don't need a fixed input length).
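A minimal sketch of the pattern described above: a Sequential model built by calling add(layer) repeatedly, with no input shape fixed up front. The layer sizes here are illustrative, not from the assignment.

```python
import tensorflow as tf

# Build a Sequential model by adding layers one at a time.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(50, activation="relu"))  # illustrative size
model.add(tf.keras.layers.Dense(10))                     # illustrative size

# No input shape was given, so the model is built lazily
# on the first call with real data.
```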
tf.keras.layers.Embedding: Initializes the embedding layer. An embedding layer in TensorFlow will accept only positive integers as input.
Embedding(input_dim, output_dim, mask_zero=False)
input_dim is the expected range of integers for each tensor in the batch. Note that input_dim is not related to the array size, but to the possible range of integers expected in the input. Usually this is the vocabulary size, but it may differ depending on further parameters. See below.
output_dim is the number of elements in the word embedding (word embedding sizes typically range from a few dozen to a few hundred, for example). Each word processed will be assigned an array of size output_dim. So if one array of shape (sentence_length,) is passed, then the Embedding layer output shape is (sentence_length, output_dim).
mask_zero is a boolean telling whether 0 is a mask value or not. If mask_zero=True, then some considerations must be made: the value 0 should be reserved as the mask value, as it will be ignored in training; and you need to add 1 to input_dim, since TensorFlow will now consider that one extra value may show up in each sentence.
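The mask_zero considerations above can be sketched as follows; the vocabulary size, embedding size, and sample sentence are illustrative values, not the assignment's.

```python
import numpy as np
import tensorflow as tf

vocab_size = 100    # illustrative vocabulary size
embedding_dim = 8   # illustrative embedding size

# mask_zero=True reserves 0 as the padding value, so input_dim
# must be vocab_size + 1 to cover the extra reserved integer.
embedding = tf.keras.layers.Embedding(
    input_dim=vocab_size + 1,
    output_dim=embedding_dim,
    mask_zero=True,
)

batch = np.array([[5, 17, 42, 0, 0]])  # one padded sentence of length 5
out = embedding(batch)
print(out.shape)  # (1, 5, 8): one embedding vector per token
```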
tf.keras.layers.LSTM: An LSTM layer.
LSTM(units, return_sequences=...): Builds an LSTM layer with hidden state and cell sizes equal to units. The arguments you will need:
units: the number of LSTM cells that every input will be passed to. In this case, set units to the Embedding output_dim. This is just a choice; in fact there is no static rule preventing one from choosing any number of LSTM units.
return_sequences: a boolean telling whether you want to return every output value from the LSTM cells. If return_sequences=False, the LSTM output shape will be (batch_size, units). Otherwise, it is (batch_size, sentence_length, units), since there will be an output for each word in the sentence.
tf.keras.layers.Dense: A dense layer.
Dense(units, activation=...): The parameters for this layer are:
units: the number of units chosen for this dense layer, i.e. the dimensionality of the output space. In this case, each value passed through the Dense layer must be mapped into a vector of length num_of_classes (in this case, len(tags)).
activation: the activation that will be performed after computing the values in the Dense layer. Since the Dense layer comes before the LogSoftmax step, you can pass the LogSoftmax function as the activation function here. You can find the implementation of LogSoftmax under tf.nn, so you may call it as tf.nn.log_softmax. See its documentation.
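The Dense + log-softmax combination above can be sketched like this; the number of tags and the input sizes are illustrative:

```python
import tensorflow as tf

num_tags = 5  # illustrative number of NER tags

# Dense maps each LSTM output vector to num_tags scores, and
# tf.nn.log_softmax turns them into log-probabilities over the tags.
dense = tf.keras.layers.Dense(units=num_tags, activation=tf.nn.log_softmax)

x = tf.random.uniform((2, 7, 8))  # (batch_size, sentence_length, lstm_units)
out = dense(x)
print(out.shape)  # (2, 7, 5): one log-probability vector per token

# Exponentiating the log-probabilities recovers a distribution
# that sums to 1 across the tag axis.
tag_probs = tf.exp(out)
```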
Instructions: You will build a function that takes as inputs the number of tags, the vocabulary size, and an optional parameter to control the embedding dimension, and outputs a TensorFlow model as discussed in the lectures.
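Putting the three layers together, one possible sketch of such a function is below. The function name, argument names, and the default embedding dimension are illustrative assumptions, not the assignment's exact API.

```python
import tensorflow as tf

def build_ner_model(len_tags, vocab_size, embedding_dim=50):
    """Sketch of the Embedding -> LSTM -> Dense NER model described above.
    Names and the default embedding_dim are illustrative assumptions."""
    model = tf.keras.Sequential()
    # +1 on input_dim because mask_zero=True reserves 0 for padding.
    model.add(tf.keras.layers.Embedding(
        input_dim=vocab_size + 1,
        output_dim=embedding_dim,
        mask_zero=True,
    ))
    # units set to the embedding size, as suggested above;
    # return_sequences=True gives one output per token.
    model.add(tf.keras.layers.LSTM(units=embedding_dim,
                                   return_sequences=True))
    # One vector of log-probabilities over the tags for every token.
    model.add(tf.keras.layers.Dense(units=len_tags,
                                    activation=tf.nn.log_softmax))
    return model

model = build_ner_model(len_tags=10, vocab_size=100)
out = model(tf.constant([[1, 2, 3]]))  # one sentence of 3 token ids
print(out.shape)  # (1, 3, 10)
```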
