Channel: Active questions tagged python - Stack Overflow

Unsupervised neural network to solve a static Schrödinger equation


I want to reproduce the results of the paper: "Neural-network-based multistate solver for a static Schrödinger equation", Hong Li, Qilong Zhai, and Jeff Z. Y. Chen, Phys. Rev. A 103, 032405, published 8 March 2021 (the deep NN architecture is shown in a figure in the original post). I tried the simple harmonic oscillator: the input is the x coordinate and the outputs are the N+1 wave functions phi_n(x), where N is the highest quantum number considered. The loss function (shown as images in the original post) contains 3 parts: the total energy, an orthonormality penalty, and L2 regularization. This is my Python code (considering the total energy only):

import tensorflow as tf
import numpy as np

# Constants
hbar = tf.constant(1.0, dtype=tf.float32)
m = tf.constant(1.0, dtype=tf.float32)
omega = tf.constant(1.0, dtype=tf.float32)

# Potential energy function
def potential_energy(x):
    return 0.5 * m * omega ** 2 * x ** 2

# Kinetic energy term
def kinetic_energy_term(psi, x):
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(x)
        dpsi_dx = tape.gradient(psi, x)
    d2psi_dx2 = tape.gradient(dpsi_dx, x)
    del tape  # Delete the tape after using it
    return -0.5 * hbar ** 2 / m * d2psi_dx2

# Expected energy function
def expected_energy(x, psi_n):
    ke = kinetic_energy_term(psi_n, x)
    pe = potential_energy(x)
    hamiltonian_psi = ke + pe * psi_n
    numerator = tf.reduce_sum(psi_n * hamiltonian_psi)
    denominator = tf.reduce_sum(psi_n * psi_n)
    return numerator / denominator

# Custom loss function
def unsupervised_loss(x, y_pred):
    energies = [expected_energy(x, y_pred[:, i:i + 1]) for i in range(y_pred.shape[-1])]
    energies = tf.convert_to_tensor(energies, dtype=tf.float32)
    energy_sum = tf.reduce_sum(energies)
    return energy_sum

# Neural network model
N = 10  # Number of quantum states
model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation='sigmoid', input_shape=(1,)),
    tf.keras.layers.Dense(50, activation='sigmoid'),
    tf.keras.layers.Dense(N, activation='linear')
])
model.compile(optimizer='adam', loss=lambda y_true, y_pred: unsupervised_loss(y_true, y_pred))

# Training data
x_train = np.linspace(-5, 5, 1000).reshape(-1, 1)
x_train_tensor = tf.convert_to_tensor(x_train, dtype=tf.float32)

# %%
# Train the model
history = model.fit(x_train_tensor, x_train_tensor, epochs=100, batch_size=32, validation_split=0.2)

error:

Traceback (most recent call last):
  File "D:\software\work\python\anaconda\lib\site-packages\IPython\core\interactiveshell.py", line 3437, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-62-c0c5a991e6f1>", line 2, in <module>
    history = model.fit(x_train_tensor, x_train_tensor, epochs=100, batch_size=32, validation_split=0.2)
  File "D:\software\work\python\anaconda\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\ADMINI~1\AppData\Local\Temp\__autograph_generated_filendq0th2a.py", line 15, in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
  File "C:\Users\ADMINI~1\AppData\Local\Temp\__autograph_generated_filea5lnci17.py", line 5, in <lambda>
    tf__lam = (lambda y_true, y_pred: ag__.with_function_scope((lambda lscope: ag__.converted_call(unsupervised_loss, (y_true, y_pred), None, lscope)), 'lscope', ag__.STD))
  File "C:\Users\ADMINI~1\AppData\Local\Temp\__autograph_generated_filedrz1d0_q.py", line 10, in tf__unsupervised_loss
    energies = [ag__.converted_call(ag__.ld(expected_energy), (ag__.ld(x), ag__.ld(y_pred)[:, ag__.ld(i):(ag__.ld(i) + 1)]), None, fscope) for i in ag__.converted_call(ag__.ld(range), (ag__.ld(y_pred).shape[(- 1)],), None, fscope)]
  File "C:\Users\ADMINI~1\AppData\Local\Temp\__autograph_generated_file40wmfh26.py", line 10, in tf__expected_energy
    ke = ag__.converted_call(ag__.ld(kinetic_energy_term), (ag__.ld(psi_n), ag__.ld(x)), None, fscope)
  File "C:\Users\ADMINI~1\AppData\Local\Temp\__autograph_generated_filemre1bw7b.py", line 13, in tf__kinetic_energy_term
    d2psi_dx2 = ag__.converted_call(ag__.ld(tape).gradient, (ag__.ld(dpsi_dx), ag__.ld(x)), None, fscope)
TypeError: in user code:

    File "D:\software\work\python\anaconda\lib\site-packages\keras\engine\training.py", line 1160, in train_function  *
        return step_function(self, iterator)
    File "<ipython-input-61-b273cbda69ea>", line 46, in None  *
        lambda y_true, y_pred: unsupervised_loss(y_true, y_pred)
    File "<ipython-input-61-b273cbda69ea>", line 33, in unsupervised_loss  *
        energies = [expected_energy(x, y_pred[:, i:i + 1]) for i in range(y_pred.shape[-1])]
    File "<ipython-input-50-7a5ddb0aa6ce>", line 24, in expected_energy  *
        ke = kinetic_energy_term(psi_n, x)
    File "<ipython-input-50-7a5ddb0aa6ce>", line 18, in kinetic_energy_term  *
        d2psi_dx2 = tape.gradient(dpsi_dx, x)

    TypeError: Argument `target` should be a list or nested structure of Tensors, Variables or CompositeTensors to be differentiated, but received None.
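The `None` in that `TypeError` comes from the first `tape.gradient` call: `psi` is the network output passed into the loss, and it was computed *before* the tape in `kinetic_energy_term` started recording, so the tape has no path from `x` to `psi` and `dpsi_dx` is `None`. A minimal sketch of the same failure (names here are illustrative, not from the post):

```python
import tensorflow as tf

x = tf.constant([[1.0], [2.0]])
y = x ** 2  # computed BEFORE any tape is recording

with tf.GradientTape() as tape:
    tape.watch(x)
    # nothing connecting x to y is recorded inside this block

# The tape never saw how y was produced from x, so there is
# no recorded path to differentiate, and gradient returns None.
g = tape.gradient(y, x)
print(g)
```

Passing that `None` into a second `tape.gradient` call, as `kinetic_energy_term` does, is what raises the `TypeError`.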

How to fix the error?

