I am working on the Coursera Deep Learning Specialization. My code passes all the examples, but I keep getting the answer wrong on the unit test.
Here is the question:

> **Exercise 3 - Implement the question answering function**
>
> Implement the `answer_question` function. Here are the steps:
>
> **Question Setup:**
> - Tokenize the given question using the provided tokenizer.
> - Add an extra dimension to the tensor for compatibility.
> - Pad the question tensor using `pad_sequences` to ensure the sequence has the specified max length. This function will truncate the sequence if it is larger or pad with zeros if it is shorter.
>
> **Answer Setup:**
> - Tokenize the initial answer, noting that all answers begin with the string `"answer: "`.
> - Add an extra dimension to the tensor for compatibility.
> - Get the id of the EOS token, typically represented by 1.
>
> **Generate Answer:**
> - Loop for `decoder_maxlen` iterations.
> - Use the `transformer_utils.next_word` function, which predicts the next token in the answer using the model, input document, and the current state of the output.
> - Concatenate the predicted next word to the output tensor.
>
> **Stop Condition:**
> - The text generation stops if the model predicts the EOS token.
> - If the EOS token is predicted, break out of the loop.

Here is my code:
```python
def answer_question(question, model, tokenizer, encoder_maxlen=150, decoder_maxlen=50):
    # Tokenize the question
    tokenized_question = tokenizer.tokenize(question)
    tokenized_question = tf.expand_dims(tokenized_question, 0)
    padded_question = pad_sequences(tokenized_question, maxlen=encoder_maxlen,
                                    padding='post', truncating='post')

    # Tokenize the initial part of the answer with "answer: "
    tokenized_answer = tokenizer.tokenize("answer: ")
    tokenized_answer = tf.expand_dims(tokenized_answer, 0)

    # Get the id of the EOS token
    eos = tokenizer.string_to_id("</s>")

    # Generate answer
    for i in range(decoder_maxlen):
        next_word_id = transformer_utils.next_word(padded_question, tokenized_answer, model)

        # Ensure next_word_id is not a scalar
        if not isinstance(next_word_id, tf.Tensor):
            raise TypeError("Expected next_word_id to be a tensor, got type: {}".format(type(next_word_id)))

        # Concatenate the predicted next word ID
        tokenized_answer = tf.concat([tokenized_answer, next_word_id], axis=-1)

        # Break if EOS token is predicted
        if eos in next_word_id.numpy():
            break

    # Return the final tokenized answer tensor
    return tokenized_answer[0]
```
Here is the error:

```
AttributeError                            Traceback (most recent call last)
Cell In[421], line 2
      1 # UNIT TEST
----> 2 w3_unittest.test_answer_question(answer_question)

File /tf/w3_unittest.py:289, in test_answer_question(target)
    286 modelx.load_weights('./pretrained_models/model_qa3')
    288 question = "question: How many are this? context: This is five."
--> 289 result = tokenizer.detokenize(target(question, modelx, tokenizer)).numpy()[0].decode()
    290 cases = []
    292 t = test_case() #inps, targs

AttributeError: 'int' object has no attribute 'decode'
```

My code passes all the examples and gets the right answer, but it doesn't pass the unit test.
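Here is a minimal sketch (no TensorFlow involved) of where I suspect the `int` is coming from. I'm assuming `detokenize` ends up producing a scalar string here, so that `.numpy()` returns a plain `bytes` object; indexing `bytes` with `[0]` in Python 3 gives an `int`, not a one-character string, and calling `.decode()` on that int would raise exactly this error. This is just my reading of the traceback, not a confirmed diagnosis:

```python
# Stand-in for what .numpy() would return on a scalar string tensor.
raw = b"five"

# Indexing a bytes object in Python 3 yields an int (the byte value),
# not a length-1 bytes/str.
first = raw[0]
print(type(first).__name__)  # int

# Calling .decode() on that int fails the same way the unit test does.
try:
    first.decode()
except AttributeError as e:
    print(e)  # 'int' object has no attribute 'decode'
```

So the question may come down to whether my function should return the batched 2-D tensor or the sliced 1-D one.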