
I have built a Sequential Keras model with three layers: a GaussianNoise layer, a hidden layer, and an output layer with the same dimension as the input. For this, I'm using the Keras package that ships with TensorFlow 2.0.0-beta1. I'd like to get the output of the hidden layer while bypassing the GaussianNoise layer, since the noise is only needed during the training phase.

To achieve my goal, I followed the instructions in https://keras.io/getting-started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer, which are also described in the question "Keras, How to get the output of each layer?".

I have tried the following example from the official Keras documentation:

from tensorflow import keras
from tensorflow.keras import backend as K

dae = keras.Sequential([
    keras.layers.GaussianNoise( 0.001, input_shape=(10,) ),
    keras.layers.Dense( 80, name="hidden", activation="relu" ),
    keras.layers.Dense( 10 )
])

optimizer = keras.optimizers.Adam()
dae.compile( loss="mse", optimizer=optimizer, metrics=["mae"] )

# Here the fitting process...
# dae.fit( · )

# Attempting to retrieve an encoder functor.
encoder = K.function([dae.input, K.learning_phase()],
                     [dae.get_layer("hidden").output])

However, when K.learning_phase() is used to create the Keras backend functor, I get the error:

Traceback (most recent call last):
  File "/anaconda3/lib/python3.6/contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 534, in _scratch_graph
    yield graph
  File "/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 3670, in __init__
    base_graph=source_graph)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/eager/lift_to_graph.py", line 249, in lift_to_graph
    visited_ops = set([x.op for x in sources])
  File "/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/eager/lift_to_graph.py", line 249, in <listcomp>
    visited_ops = set([x.op for x in sources])
AttributeError: 'int' object has no attribute 'op'

The code works great if I don't include K.learning_phase(), but I need to make sure that the output from my hidden layer is evaluated over an input that is not polluted with noise (i.e. in "test" mode -- not "training" mode).
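As a sanity check on that last point, GaussianNoise is already inactive outside of training mode; a quick sketch confirming that the `training` flag controls it (the stddev of 0.5 here is arbitrary, chosen only to make the noise visible):

```python
import numpy as np
import tensorflow as tf

noise = tf.keras.layers.GaussianNoise(0.5)
x = np.ones((4, 10), dtype="float32")

# In test mode GaussianNoise is the identity; in training mode it adds noise.
clean = noise(x, training=False).numpy()
noisy = noise(x, training=True).numpy()
print(np.allclose(clean, x))   # True
print(np.allclose(noisy, x))   # False (almost surely, with stddev=0.5)
```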

I know my other option is to create a model from the original denoising autoencoder, but can anyone point me into why my approach from the officially documented functor creation fails?
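For reference, that other option (building a sub-model) can be sketched as follows, reusing the model definition from my snippet above; this works eagerly in TF2 without touching K.learning_phase():

```python
import numpy as np
from tensorflow import keras

# Same model as in the question.
dae = keras.Sequential([
    keras.layers.GaussianNoise(0.001, input_shape=(10,)),
    keras.layers.Dense(80, name="hidden", activation="relu"),
    keras.layers.Dense(10),
])

# Sub-model mapping the original input to the hidden layer's output.
encoder = keras.Model(inputs=dae.input,
                      outputs=dae.get_layer("hidden").output)

x = np.random.randn(32, 10)
# training=False keeps GaussianNoise inactive (it is a no-op at
# inference time anyway).
hidden_out = encoder(x, training=False)
print(hidden_out.shape)  # (32, 80)
```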

  • It'd help if you shared your full model code - or its smallest version as a minimal reproducible example. Also, if using tensorflow.keras.backend, make sure all your layers come from tensorflow.keras rather than keras, for compatibility reasons. Commented Sep 28, 2019 at 21:45
  • @OverLordGoldDragon I have added a simple code snippet that in my case fails when building the encoder functor. Commented Sep 28, 2019 at 21:56
  • Strange, no errors for me - are your packages up-to-date? Also, encoder alone won't get you the outputs - but I included a complete script in my answer that does. Let me know if it doesn't work. (Also, if not using already, I'd strongly recommend Anaconda for your python packages, as it ensures there are no conflicts) Commented Sep 28, 2019 at 22:05

1 Answer


First, ensure your packages are up-to-date, as your script works fine for me. Second, encoder alone won't get you the outputs - continuing from your snippet after # Here the fitting process...,

import numpy as np  # needed for the toy data below

x = np.random.randn(32, 10)  # toy data
y = np.random.randn(32, 10)  # toy labels
dae.fit(x, y)  # run one iteration

encoder = K.function([dae.input, K.learning_phase()],
                     [dae.get_layer("hidden").output])
outputs = encoder([x, int(False)])[0]  # functor returns a list of len 1
print(outputs.shape)
# (32, 80)

However, as of TensorFlow 2.0.0-rc2, this will not work with eager execution enabled - disable it before building the model:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

7 Comments

Well, it's unfortunate, but this works only in an environment with TensorFlow 1.14 (the stable release version). I upgraded all of my packages via Anaconda, and I did not find a way to conda install tensorflow 2.0.0-rc2, so I had to pip install TF 2.
@YoungMin It's why I pass on betas, especially for something as massive as TF that's not nearly bug-free even out of beta. One way you can improve compatibility is, put your TF2 install where your TF1 was, in Anaconda folders - then run conda update --all - did this once myself, worked (but no promises; back up your working conda environment just in case). Lastly, try passing in int(0) instead of K.learning_phase()
@YoungMin Also, I just looked through the source code, and a relevant function was changed: Function(), which was used to evaluate function(), is now imported from tensorflow.python.keras.backend as tf_keras_backend, and ultimately runs via the EagerExecutionFunction class here, which is what your error trace points to; it seems int(0) may actually be worse, as .op must evaluate to something valid
@YoungMin The lines are closer now, but still off - note that your line 3670 is master's 3652. Now, did you try running it without eager? Eager won't make it any easier to see the outputs with the code you're using - and it may be the source of the bug. If disabling it doesn't work, open up this file on this line locally, and add print(sources) there - then rerun everything and see output
Yeah, it worked without eager execution! Thanks @OverLordGoldDragon!
