The analysis of biomedical signals, such as EEG recordings of brain activity, provides a means for recognizing cognitive tasks and diagnosing neural disorders. These signals are frequently transformed into visual representations such as spectrograms, which reveal characteristic patterns and, once specific features are extracted from them, serve as a basis for classification. We designed a new method that feeds spectrogram images directly, without any feature selection or extraction procedure, into a deep convolutional neural network trained to classify motor-impairment neural disorder in a person. The proposed method was tested on a set of impaired and unimpaired subjects, where it outperformed traditional machine learning methods. The results, obtained without any human intervention and with all parameters at their default values, did not lag far behind an established state-of-the-art method that exploits domain knowledge in the analysis of EEG recordings. Based on these experimental results, we believe the proposed method is a sound basis for further optimization towards a competitive, fully automated method for the classification of EEG signals.
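The spectrogram transformation described above can be sketched, for example, with `scipy.signal.spectrogram`; the sampling rate, window length, and synthetic signal below are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np
from scipy.signal import spectrogram

# Hypothetical parameters: a 4-second single-channel "EEG" sampled at 256 Hz.
fs = 256
rng = np.random.default_rng(0)
t = np.arange(4 * fs) / fs
# Synthetic signal: a 10 Hz alpha-band component plus noise (illustrative only).
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Time-frequency image: 64-sample windows with 50% overlap.
freqs, times, Sxx = spectrogram(eeg, fs=fs, nperseg=64, noverlap=32)

# Sxx is a 2-D array (frequency bins x time frames) that can be rendered as an
# image and fed directly to a convolutional network, as the method describes.
print(Sxx.shape)
```

The resulting 2-D array plays the role of the input image; in the proposed pipeline no further feature extraction is applied before the network.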