Channel: Active questions tagged python - Stack Overflow

How can I solve RecursionError while visualizing my model?


I'm trying to visualize the importance scores of my data with the following code in Google Colab, and the error below keeps occurring. I tried raising the system recursion limit, but the same error still appears. Can anybody help?

RecursionError
Traceback (most recent call last)
in <cell line: 1>()
----> 1 outFileName(model, tis_testX, 14, 'Final_sal_tis.txt') # 3rd argument is for layer index
2 normalize('Final_sal_tis.txt', 'Final_sal_tis_norm.txt')
3
4 LogoFile('Final_sal_tis_norm.txt', 'TIS_Final')

8 frames
... last 4 frames repeated, from the frame below ...

/usr/local/lib/python3.10/dist-packages/keras/layers/core/tf_op_layer.py in handle(self, op, args, kwargs)
117 for x in tf.nest.flatten([args, kwargs])
118 ):
--> 119 return TFOpLambda(op)(*args, **kwargs)
120 else:
121 return self.NOT_SUPPORTED

RecursionError: maximum recursion depth exceeded while getting the str of an object

The code below contains the functions I've used:

import tensorflow as tf
import tensorflow.keras.backend as K
import numpy as np
import matplotlib.pyplot as plt
from deeplift.visualization import viz_sequence
from tensorflow.keras.applications import xception

def generateSaliency(model, out_layer_index):
    import sys
    sys.setrecursionlimit(2000)
    print(sys.getrecursionlimit())
    inp = model.layers[0].input
    outp = model.layers[out_layer_index].output
    max_outp = K.max(outp, axis=1)
    saliency = tf.keras.backend.gradients(tf.keras.backend.sum(max_outp), inp)[0]
    max_class = K.argmax(outp, axis=1)
    return K.function([inp], [saliency, max_class])

def outFileName(model, x_test, layer_index, outFile=None):  # x3 test added
    # import sys
    # sys.setrecursionlimit(1500)
    xt = x_test.reshape(-1, 300, 4)
    # x2t = x2_test.reshape(-1, 300, 4)
    # x3t = x3_test.reshape(-1, 300, 4)  # added
    saliency = generateSaliency(model, layer_index)([xt])  # , x2t, x3t]) # x3t added
    A = [1, 0, 0, 0]
    C = [0, 1, 0, 0]
    G = [0, 0, 1, 0]
    T = [0, 0, 0, 1]
    lst_data = []
    lst_sal = []
    lst_norm_sal = []
    outputFile = open(outFile, 'w')
    x_test = x_test.tolist()
    # x2_test = x2_test.tolist()
    # x3_test = x3_test.tolist()  # added
    for nuc in x_test:
        sal = saliency[0][x_test.index(nuc)]
        # import code
        # code.interact(local=dict(globals(), **locals()))
        for i in range(len(nuc)):
            if nuc[i] == A:
                lst_data.append('A')
                lst_sal.append(sal[i][A.index(1)])
            elif nuc[i] == C:
                lst_data.append('C')
                lst_sal.append(sal[i][C.index(1)])
            elif nuc[i] == G:
                lst_data.append('G')
                lst_sal.append(sal[i][G.index(1)])
            else:
                lst_data.append('T')
                lst_sal.append(sal[i][T.index(1)])
        # print(lst_sal)
        print(','.join(lst_data), file=outputFile)
        print(','.join([str(element) for element in lst_sal]), file=outputFile)
        lst_data = []
        lst_sal = []
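For reference, I've also been considering a GradientTape-based version of the same computation. This is only a sketch, assuming TF 2.x: in eager mode, `tf.keras.backend.gradients` only works inside a symbolic graph, and calling it on a live model output may be what sends Keras into the `TFOpLambda` wrapping loop shown in the traceback. The function name and the tiny test model are my own, not part of my original code:

```python
import numpy as np
import tensorflow as tf


def generate_saliency_tf2(model, x, out_layer_index):
    """Gradient of the per-example max output unit w.r.t. the input.

    Sketch of the same computation as generateSaliency(), but done
    eagerly with tf.GradientTape instead of K.gradients()/K.function(),
    which are graph-mode-only in TF 2.x.
    """
    # Sub-model ending at the layer whose activations we inspect.
    sub_model = tf.keras.Model(model.input, model.layers[out_layer_index].output)
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)                      # x is a plain tensor, so watch it
        outp = sub_model(x)
        objective = tf.reduce_sum(tf.reduce_max(outp, axis=1))
    saliency = tape.gradient(objective, x)  # same shape as x
    max_class = tf.argmax(outp, axis=1)
    return saliency.numpy(), max_class.numpy()
```

With this, `outFileName` could call `generate_saliency_tf2(model, xt, layer_index)` directly instead of going through `K.function`, and no recursion-limit tweaking should be needed.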

The code below is how I call it:

outFileName(model, tis_testX, 14, 'Final_sal_tis.txt')

From this code I'm expecting a text file where, for each input sequence, the original string appears on one line and the importance score of each character appears on the line below it.
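The per-character decoding step in `outFileName` can also be sketched with plain NumPy, replacing the if/elif chain with `argmax`. This is an illustrative helper of my own (the name `decode_with_scores` is not from my original code); as a side benefit, iterating with `zip` avoids `x_test.index(nuc)`, which would pick the wrong saliency row whenever two input sequences happen to be identical:

```python
import numpy as np

# Column order matches the one-hot rows A=[1,0,0,0], C, G, T above.
ALPHABET = np.array(list("ACGT"))


def decode_with_scores(x_onehot, saliency):
    """Yield (letters, scores) pairs, one per example.

    x_onehot: (n, length, 4) one-hot array; saliency: same shape.
    For each position, keep the saliency value of the channel that
    is "hot", i.e. the score of the actual nucleotide.
    """
    idx = x_onehot.argmax(axis=-1)  # (n, length) letter indices
    scores = np.take_along_axis(saliency, idx[..., None], axis=-1)[..., 0]
    for letters, row in zip(ALPHABET[idx], scores):
        yield list(letters), row
```

Each yielded pair maps directly onto the two lines written per sequence: `','.join(letters)` and the comma-joined scores.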


