
This is an excerpt from Manning's book Deep Learning for Search.
We’ll run our neural search examples on top of open source software written in Java: Apache Lucene (http://lucene.apache.org), an information retrieval library, and Deeplearning4j (http://deeplearning4j.org), a DL library. But we’ll focus as much as possible on principles rather than implementation, to make sure the techniques explained in this book can be applied across different technologies and scenarios. At the time of writing, Deeplearning4j is a widely used DL framework in enterprise communities; it’s part of the Eclipse Foundation. It has also gained good adoption because of its integration with popular big data frameworks like Apache Spark. Full source code accompanying this book can be found at www.manning.com/books/deep-learning-for-search and on GitHub at https://github.com/dl4s/dl4s.

Other DL frameworks exist, though; for example, TensorFlow (from Google) is popular among the Python and research communities. New tools are invented almost every day, so I decided to focus on a relatively easy-to-use DL framework that integrates well with Lucene, one of the most widely adopted search libraries for the JVM. If you’re working with Python, you can find TensorFlow implementations of most of the DL code used in this book, together with some instructions, on GitHub at https://github.com/dl4s/pydl4s.
Now that you’ve trained a word2vec model on the Hot 100 Billboard dataset using Deeplearning4j, let’s use it in conjunction with the search engine to generate synonyms. As explained in chapter 1, a token filter performs operations on the terms provided by a tokenizer, such as filtering them out or, as in this case, adding other terms to be indexed. A Lucene TokenFilter is based on the incrementToken API, which returns true while there are tokens to consume and false once the end of the token stream is reached. Implementors of this API consume one token at a time (for example, by filtering or expanding a token). Figure 2.14 shows a diagram of how word2vec-based synonym expansion is expected to work.
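To make the incrementToken mechanics concrete, here’s a minimal sketch of such a synonym-expanding filter. It is not the book’s implementation: the class name `W2VSynonymFilter` and the pluggable `nearestWords` provider function are assumptions made for illustration; the provider stands in for a lookup into the trained word2vec model.

```java
import java.io.IOException;
import java.util.Collection;
import java.util.LinkedList;
import java.util.Queue;
import java.util.function.Function;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

/**
 * Sketch of a synonym-expanding TokenFilter: for each incoming token it asks
 * a pluggable provider (for example, a word2vec model) for related words and
 * emits them at the same position as the original token.
 */
public class W2VSynonymFilter extends TokenFilter {

  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final PositionIncrementAttribute posIncAtt =
      addAttribute(PositionIncrementAttribute.class);

  private final Function<String, Collection<String>> nearestWords;
  private final Queue<String> pending = new LinkedList<>();
  private State savedState; // attribute state of the original token

  public W2VSynonymFilter(TokenStream input,
                          Function<String, Collection<String>> nearestWords) {
    super(input);
    this.nearestWords = nearestWords;
  }

  @Override
  public boolean incrementToken() throws IOException {
    // First flush any synonyms queued for the previously returned token.
    if (!pending.isEmpty()) {
      restoreState(savedState); // reuse the original token's attributes
      termAtt.setEmpty().append(pending.poll());
      posIncAtt.setPositionIncrement(0); // same position as the original term
      return true;
    }
    if (!input.incrementToken()) {
      return false; // end of the token stream
    }
    // Queue candidate synonyms for the current token and remember its state.
    pending.addAll(nearestWords.apply(termAtt.toString()));
    savedState = captureState();
    return true;
  }

  @Override
  public void reset() throws IOException {
    super.reset();
    pending.clear();
    savedState = null;
  }
}
```

With a trained Deeplearning4j model, the provider could be something like `word -> word2vec.wordsNearest(word, 2)`, which returns the closest words in the learned vector space; the cutoff of two neighbours is an arbitrary choice here.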