Sunday 1 April 2018

seq2seq

If we take a high-level view, a seq2seq model has three main components: an encoder, a decoder, and an intermediate state (often called the context vector) that connects them.
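The three components can be sketched with plain NumPy: an encoder RNN compresses the input sequence into a fixed-size context vector, and a decoder RNN unrolls from that vector to produce the output sequence. This is a minimal illustration with vanilla RNN cells and made-up dimensions, not a trained or production model:

```python
import numpy as np

def rnn_step(x, h, Wx, Wh, b):
    # One vanilla RNN step: new hidden state from input x and previous state h.
    return np.tanh(Wx @ x + Wh @ h + b)

def encode(xs, Wx, Wh, b, hidden):
    # Encoder: run over the source sequence; the final hidden state is the
    # intermediate "context vector" handed to the decoder.
    h = np.zeros(hidden)
    for x in xs:
        h = rnn_step(x, h, Wx, Wh, b)
    return h

def decode(context, steps, Wx, Wh, b, Wout):
    # Decoder: unroll from the context vector, feeding each output back in
    # as the next input (greedy decoding, no attention).
    h = context
    y = np.zeros(Wx.shape[1])
    outputs = []
    for _ in range(steps):
        h = rnn_step(y, h, Wx, Wh, b)
        y = Wout @ h
        outputs.append(y)
    return outputs

# Illustrative sizes and random (untrained) weights.
rng = np.random.default_rng(0)
in_dim, hid, out_steps = 4, 8, 3
Wx_e, Wh_e, b_e = rng.normal(size=(hid, in_dim)), rng.normal(size=(hid, hid)), np.zeros(hid)
Wx_d, Wh_d, b_d = rng.normal(size=(hid, in_dim)), rng.normal(size=(hid, hid)), np.zeros(hid)
Wout = rng.normal(size=(in_dim, hid))

source = [rng.normal(size=in_dim) for _ in range(5)]
ctx = encode(source, Wx_e, Wh_e, b_e, hid)          # context vector, shape (8,)
outs = decode(ctx, out_steps, Wx_d, Wh_d, b_d, Wout)  # 3 output vectors
```

Note that the input sequence can be any length, but everything the decoder knows about it must pass through the single fixed-size context vector, which is the bottleneck that attention mechanisms were later introduced to relieve.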

The encoder-decoder model provides a pattern for using recurrent neural networks to address challenging sequence-to-sequence prediction problems, such as machine translation.

Encoder-decoder models can be developed in the Keras Python deep learning library; a neural machine translation system built with this pattern is described on the Keras blog, and sample code is distributed with the Keras project.
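The model definition from that pattern looks roughly like the following functional-API sketch. The latent dimension and the vocabulary sizes here are illustrative assumptions, and the training and inference loops (teacher forcing, then step-by-step sampling with separate encoder/decoder models) are omitted:

```python
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 256          # assumed hidden size
num_encoder_tokens = 71   # assumed source vocabulary size
num_decoder_tokens = 93   # assumed target vocabulary size

# Encoder: consume the source sequence, keep only the final LSTM states.
encoder_inputs = keras.Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: generate the target sequence, initialised from the encoder states.
decoder_inputs = keras.Input(shape=(None, num_decoder_tokens))
decoder_lstm_out, _, _ = layers.LSTM(
    latent_dim, return_sequences=True, return_state=True
)(decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = layers.Dense(num_decoder_tokens, activation="softmax")(decoder_lstm_out)

model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
```

During training the decoder receives the target sequence shifted by one step (teacher forcing); at inference time the encoder states are computed once and the decoder is run one token at a time, feeding each prediction back in.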
