In the last few years, a machine learning field named Deep Learning (DL) has improved the results of several challenging tasks, mainly in the field of computer vision. Deep architectures such as Convolutional Neural Networks (CNNs) have proven very powerful for computer vision tasks. For tasks related to language and time series, state-of-the-art models such as Long Short-Term Memory (LSTM) networks have a recurrent component that takes the order of inputs into account and is able to memorise them. Among the tasks related to Natural Language Processing (NLP), an important problem in computational linguistics is authorship attribution, where the goal is to find the true author of a text or, from an author profiling perspective, to extract information such as gender, origin and socio-economic background. However, few works have tackled authorship analysis with recurrent neural networks (RNNs). Consequently, in this study we explore the performance of several recurrent neural models, namely Echo State Networks (ESNs), LSTMs and Gated Recurrent Units (GRUs), on three authorship analysis tasks. The first is the classical authorship attribution task on the Reuters C50 dataset, where models must predict the true author of a document from a set of candidate authors. The second task, referred to as author profiling, requires the model to determine the gender (male/female) of the author of a set of tweets, using the PAN 2017 dataset from the CLEF conference. The third task, referred to as author verification, uses an in-house dataset named SFGram, composed of dozens of science-fiction magazines from the 1950s to the 1970s. This task is divided into two problems. In the first, the goal is to extract the passages written by a particular author from a magazine co-written by several dozen authors. In the second, the goal is to determine whether a magazine contains passages written by a particular author.
In order for our research to be applicable in authorship studies, we limited the evaluated models to those with a so-called many-to-many architecture. This fulfils a fundamental constraint of the field of stylometry, which is the ability to provide evidence for each prediction made. To evaluate these three models, we defined a set of experiments, performance measures and hyperparameters that could impact the output. We carried out these experiments with each model and its corresponding hyperparameters. We then used statistical tests to detect significant differences between these models, and between them and state-of-the-art baseline methods in authorship analysis. Our results show that shallow and simple RNNs such as ESNs can be competitive with traditional methods in authorship studies while keeping a training time that is usable in practice and a reasonable number of parameters. These properties allow them to outperform much more complex neural models such as LSTMs and GRUs, which are considered state of the art in NLP. We also show that pretrained word and character features can be useful on stylometry problems if they are trained on a similar dataset. Consequently, interesting results are achievable on such tasks, where the quantity of data is limited and which are therefore difficult for deep learning methods. We also show that representations based on words and on combinations of three characters (trigrams) are the most effective for this kind of method. Finally, we draw a landscape of possible research paths for the future of neural networks and deep learning methods in the field of authorship analysis.
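The low training cost of ESNs mentioned above comes from the fact that only a linear readout is fitted while the recurrent weights remain fixed and random. The following is a minimal sketch of this idea in NumPy; the dimensions, leak rate and ridge parameter are purely illustrative and not the settings used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: input features, reservoir units, output classes
n_in, n_res, n_out = 50, 200, 2

# Fixed random weights: neither W_in nor W_res is ever trained
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / np.abs(np.linalg.eigvals(W_res)).max()  # spectral radius < 1

def reservoir_states(X, leak=0.3):
    """Run an input sequence X (T x n_in) through the leaky reservoir."""
    x = np.zeros(n_res)
    states = []
    for u in X:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)  # T x n_res: one state per time step

def train_readout(S, Y, ridge=1e-3):
    """Fit the linear readout by ridge regression (the only training step).

    S: reservoir states (T x n_res), Y: one-hot targets (T x n_out).
    """
    return np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ Y)

# Toy usage: one random "document" of 30 time steps, all labelled class 0
X = rng.normal(size=(30, n_in))
Y = np.tile([1.0, 0.0], (30, 1))
S = reservoir_states(X)
W_out = train_readout(S, Y)
probs = S @ W_out  # per-time-step scores: a many-to-many output
```

Because the readout produces one prediction per time step, this architecture naturally supports the many-to-many constraint above: each position in the text receives its own score that can serve as evidence for the final decision.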