ISSN: 2229-371X


Editorial Open Access

Machine Learning 2018: Overview of recent advances of deep learning applications in computer vision - Abed Benaichouche - Inception Institute of Artificial Intelligence

Abstract

Recently, deep learning (DL) has won numerous challenges in computer vision and machine learning. In this presentation, we introduce real-world applications of the convolutional neural network (CNN), the recurrent neural network (RNN), and the generative adversarial network (GAN) in the computer vision area, and we show a selection of recent research that the Inception Institute of Artificial Intelligence (IIAI) is leading in the field of computer vision and artificial intelligence. For CNNs, we present their application to face detection and annotation, with demos of object recognition and camera pose estimation. For GANs, we show their use for image colorization and art style transfer. Finally, we present a new approach to face detection and super-resolution that uses both CNN and GAN models. For each demo, we present the designed system, its limitations, and perspectives for possible improvement.

With the recent advancement of digital technologies, data sets have grown so large that traditional data processing and machine learning techniques can no longer cope with them effectively. Analyzing complex, high-dimensional, noise-contaminated data sets is a major challenge, and it is crucial to develop novel algorithms that can summarize and classify such data, extract the important information, and convert it into an understandable form. On these problems, deep learning models have shown outstanding performance over the past decade.

DL has revolutionized artificial intelligence (AI), solving many complex problems that had stood open in the AI community for years. DL models are deeper variants of artificial neural networks (ANNs) with multiple layers, whether linear or non-linear; each layer is connected to the layers below and above it through different weights. The ability of DL models to learn hierarchical features from various types of data, e.g., numerical, image, text, and audio, makes them powerful at solving recognition, regression, semi-supervised, and unsupervised problems. In recent years, various deep architectures with different learning paradigms have been introduced in quick succession to build machines that perform as well as, or even better than, humans in domains such as medical diagnosis, self-driving cars, natural language and image processing, and predictive forecasting. To showcase some of these recent advances in deep learning, we have selected 14 papers from the articles accepted in this journal to organize this issue. Focusing on recent developments in DL architectures and their applications, we classify the articles into four categories: (1) deep architectures and convolutional neural networks, (2) incremental learning, (3) recurrent neural networks, and (4) generative models and adversarial examples.

The deep neural network (DNN) is one of the most common DL models; it contains multiple layers of linear and non-linear operations. A DNN extends the standard neural network with multiple hidden layers, which allows the model to learn more complex representations of the input data.
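To make this concrete, the following is a minimal sketch of such a multi-layer network. It assumes PyTorch, and the layer sizes and batch size are illustrative choices of ours, not taken from the article or from the IIAI systems.

import torch
import torch.nn as nn

# A deep neural network (multi-layer perceptron): several hidden layers,
# each a linear operation followed by a non-linear activation. The
# learned weights connect each layer to the layers below and above it.
class SimpleDNN(nn.Module):
    def __init__(self, in_features=784, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 256),   # first hidden layer
            nn.ReLU(),                     # non-linear operation
            nn.Linear(256, 128),           # second hidden layer
            nn.ReLU(),
            nn.Linear(128, num_classes),   # output layer (class scores)
        )

    def forward(self, x):
        return self.net(x)

model = SimpleDNN()
scores = model(torch.randn(32, 784))       # a batch of 32 flattened inputs
print(scores.shape)                        # torch.Size([32, 10])

Each additional hidden layer composes the features learned by the layer below it, which is what lets deeper variants represent more complex functions of the input.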
In addition, the convolutional neural network (CNN) is a variant of the DNN inspired by the visual cortex of animals. A CNN usually contains three types of layers: convolution, pooling, and fully connected layers. The convolution and pooling layers sit in the lower levels. The convolution layers generate a set of linear activations, each followed by a non-linear function; in effect, they apply learned filters that extract local features and reduce the complexity of the input data. The pooling layers then down-sample the filtered results, shrinking the activation maps into smaller matrices; by reducing complexity in this way, pooling also helps to control over-fitting. The fully connected layers come after the convolution and pooling layers, in order to learn more abstract representations of the input data. In the last layer, a loss function, e.g., a soft-max classifier, maps the input data to its corresponding class label.
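The same three layer types can be sketched directly in code. The following is a minimal, illustrative CNN, again assuming PyTorch; the filter counts and the 28x28 grayscale input are our own assumptions, not the architectures demonstrated by IIAI. Note that nn.CrossEntropyLoss combines the soft-max mapping with the loss computation.

import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Convolution and pooling layers in the lower levels.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # filters -> linear activations
            nn.ReLU(),                                   # non-linear function
            nn.MaxPool2d(2),                             # pooling: 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling: 14x14 -> 7x7
        )
        # Fully connected layer on top of the pooled activation maps.
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)         # convolution + pooling stages
        x = torch.flatten(x, 1)      # flatten activation maps for the dense layer
        return self.classifier(x)    # class scores (logits)

model = SimpleCNN()
images = torch.randn(8, 1, 28, 28)                    # batch of 28x28 grayscale images
labels = torch.randint(0, 10, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)   # soft-max + loss on the last layer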

Biography:

Abed Benaichouche worked at the Inception Institute of Artificial Intelligence, UAE.

