Mshabab Alrizah

Deep Learning

Deep learning in neural networks: An overview

In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their *credit assignment paths*, which are chains of possibly learnable, causal […]


Recurrent convolutional neural network for object recognition

In recent years, the convolutional neural network (CNN) has achieved great success in many computer vision tasks. Partially inspired by neuroscience, CNN shares many properties with the visual system of the brain. A prominent difference is that CNN is typically a feed-forward architecture, while in the visual system recurrent connections are abundant. Inspired by this […]
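The recurrent connections discussed above can be pictured as a convolutional layer whose output is fed back into itself for a few unrolled steps with shared weights. The following is a minimal PyTorch sketch of such a layer; the module name, channel sizes, and number of iterations are illustrative assumptions rather than the paper's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecurrentConvLayer(nn.Module):
    """Sketch of a convolutional layer with recurrent (lateral) connections.

    Hypothetical example: a feed-forward convolution provides the initial
    response, then a recurrent convolution with shared weights refines it
    for `steps` unrolled iterations.
    """

    def __init__(self, in_channels, out_channels, steps=3):
        super().__init__()
        self.feed_forward = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.recurrent = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.steps = steps

    def forward(self, x):
        # Feed-forward pass defines the initial state.
        state = F.relu(self.feed_forward(x))
        # Each unrolled step reuses the same recurrent weights.
        for _ in range(self.steps):
            state = F.relu(self.feed_forward(x) + self.recurrent(state))
        return state


if __name__ == "__main__":
    layer = RecurrentConvLayer(3, 16, steps=3)
    out = layer(torch.randn(1, 3, 32, 32))
    print(out.shape)  # torch.Size([1, 16, 32, 32])
```

Because the recurrent weights are shared across steps, unrolling deepens the effective computation without adding parameters, which is the basic appeal of this design.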


Batch-normalized Maxout Network in Network

This paper reports a novel deep architecture referred to as Maxout network In Network (MIN), which can enhance model discriminability and facilitate the process of information abstraction within the receptive field. The proposed network adopts the framework of the recently developed Network In Network structure, which slides a universal approximator, a multilayer perceptron (MLP) with rectifier […]
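The sliding MLP of Network In Network is commonly realized as 1x1 convolutions stacked after a spatial convolution; the MIN idea adds maxout units and batch normalization to that block. Below is a hypothetical PyTorch sketch of one such batch-normalized maxout MLP-conv block; the layer names, kernel sizes, and number of maxout pieces are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class MaxoutConv2d(nn.Module):
    """Maxout over `pieces` parallel convolutional feature maps (sketch)."""

    def __init__(self, in_channels, out_channels, kernel_size, pieces=2, **kwargs):
        super().__init__()
        self.pieces = pieces
        self.out_channels = out_channels
        self.conv = nn.Conv2d(in_channels, out_channels * pieces, kernel_size, **kwargs)

    def forward(self, x):
        y = self.conv(x)
        n, _, h, w = y.shape
        # Element-wise maximum over the `pieces` candidate maps per output channel.
        return y.view(n, self.out_channels, self.pieces, h, w).max(dim=2).values


def min_block(in_channels, out_channels):
    """Hypothetical batch-normalized maxout MLP-conv block: one spatial maxout
    convolution followed by two 1x1 maxout convolutions (the sliding MLP),
    each followed by batch normalization."""
    return nn.Sequential(
        MaxoutConv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_channels),
        MaxoutConv2d(out_channels, out_channels, kernel_size=1),
        nn.BatchNorm2d(out_channels),
        MaxoutConv2d(out_channels, out_channels, kernel_size=1),
        nn.BatchNorm2d(out_channels),
    )


if __name__ == "__main__":
    block = min_block(3, 32)
    print(block(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 32, 32, 32])
```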


Deep Learning using Linear Support Vector Machines

In this paper, the authors demonstrate a small but consistent advantage of replacing the softmax layer with a linear support vector machine in fully-connected and convolutional neural networks. Learning minimizes a margin-based loss instead of the cross-entropy loss. While there have been various combinations of neural nets and SVMs in prior art, their results using […]
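The margin-based objective mentioned above can be illustrated with a multiclass squared hinge (L2-SVM) loss applied to the network's final linear layer in place of softmax cross-entropy. The snippet below is a minimal sketch under that assumption; the one-vs-rest formulation, margin value, and function name are illustrative choices, not the paper's exact setup.

```python
import torch


def l2_svm_loss(scores, targets, margin=1.0):
    """Multiclass one-vs-rest squared hinge (L2-SVM) loss (sketch).

    scores:  (batch, num_classes) raw outputs of the final linear layer.
    targets: (batch,) integer class labels.
    """
    # Build +1 / -1 indicators for each class (one-vs-rest).
    signs = -torch.ones_like(scores)
    signs.scatter_(1, targets.unsqueeze(1), 1.0)
    # Squared hinge: quadratic penalty for scores on the wrong side of the margin.
    hinge = torch.clamp(margin - signs * scores, min=0.0)
    return (hinge ** 2).sum(dim=1).mean()


if __name__ == "__main__":
    logits = torch.randn(8, 10, requires_grad=True)  # final linear layer outputs
    labels = torch.randint(0, 10, (8,))
    loss = l2_svm_loss(logits, labels)
    loss.backward()
    print(loss.item())
```

The weight-regularization term of the SVM objective is typically handled by weight decay on the final layer's parameters rather than inside the loss function itself.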
