FCS News


Deep Learning & Neural Networks

Posted by CS Magazine on October 3, 2014 in Research, Graduate, Faculty, News, Big Data & Machine Learning
Dr. Trappenberg in HAL Lab

Originally shared in the Fall 2014 CS Magazine.

How the HAL Lab uses machine learning to better understand our own brains

Well-known companies such as Google, Amazon and Facebook, as well as many smaller tech companies, are hiring computer scientists with backgrounds in machine learning.

Machine learning—the art of teaching machines from data—has matured considerably in the last few years. Such methods are now behind many advanced data mining techniques, such as speech recognition on Android phones or image search on Google. Indeed, machine learning is a major technique for analyzing big data, a marriage made in digital heaven.

Within machine learning, there is now a new old kid in town named deep learning. Deep learning mostly refers to good old neural networks, which were popular in the late 1980s and early 1990s. Much as today, business journals at the time were raving about the possibilities such methods brought to data mining and forecasting. By the late 1990s, however, progress seemed to have stalled: machines could not reach human-level performance in tasks like object recognition and speech analysis. Methods like causal modeling then took over, and neural networks even got a bad name.

While neural networks lay low over the following two decades, much progress was made in understanding them. It is now understood that the 1990s simply lacked the example data and computing power needed to reach the regime where these networks can outperform humans.

Deep learning on the rise

The availability of fast graphics processing units (GPUs) has been a major factor in this progress. GPUs are very good at crunching numbers in the matrix operations behind most graphics rendering—and neural networks use similar operations. With the help of GPUs, networks that once took months to train on regular workstations can now be trained in three weeks.
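The matrix view can be sketched in a few lines of NumPy (a toy example of ours, not the lab's software; on a GPU the same operation simply runs much faster):

```python
import numpy as np

# A neural-network layer boils down to one matrix operation:
# outputs = activation(weights @ inputs + bias)
rng = np.random.default_rng(0)
inputs = rng.standard_normal(784)          # e.g. a flattened 28x28 image
weights = rng.standard_normal((100, 784))  # 100 hidden units
bias = np.zeros(100)

# ReLU activation: keep positive responses, zero out the rest
hidden = np.maximum(0, weights @ inputs + bias)
print(hidden.shape)  # (100,)
```

Training adjusts `weights` so the outputs become useful; the heavy lifting is still these matrix products, which is exactly what GPUs accelerate.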

Big data has also been a major factor. Many companies collect lots of data but do not yet know how to use it efficiently. Most basic neural-network approaches are based on supervised learning, which needs labeled data. Companies have invested in labeling such data through crowdsourcing and now have databases of millions of pictures labeled with thousands of different categories.
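Supervised learning in miniature looks something like the following sketch (an illustrative toy we made up, with labels generated from a known rule instead of crowdsourced annotations):

```python
import numpy as np

# Labeled examples: inputs X with labels y provided by a "supervisor".
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # here the labels come from a simple rule

# A perceptron-style learner: nudge the weights whenever
# predictions disagree with the labels.
w = np.zeros(2)
for _ in range(100):
    pred = (X @ w > 0).astype(float)
    w += 0.1 * (y - pred) @ X / len(X)

accuracy = ((X @ w > 0).astype(float) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The same principle scales up: with millions of labeled pictures, the "rule" being learned is far richer, but the driver is still the gap between prediction and label.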

All of these advances mean that large neural networks can now be built. Not only can these networks be large, they can now have many stages of representations of the data.

These many layers are what deep learning is all about. Deep networks are now winning many data mining competitions, setting the new state of the art in areas like speech recognition and computer vision.
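A minimal sketch of what "many stages of representation" means: layers are stacked so that each one re-represents the output of the previous one (toy NumPy code with made-up layer sizes, not a production network):

```python
import numpy as np

def deep_forward(x, layer_weights):
    """Pass x through a stack of layers; each layer builds a new
    representation from the previous layer's output."""
    for W in layer_weights:
        x = np.maximum(0, W @ x)  # linear map followed by ReLU
    return x

rng = np.random.default_rng(1)
sizes = [784, 256, 128, 64, 10]  # four stacked layers make the network "deep"
weights = [rng.standard_normal((m, n)) * 0.01
           for n, m in zip(sizes[:-1], sizes[1:])]

out = deep_forward(rng.standard_normal(784), weights)
print(out.shape)  # (10,)
```

Early layers tend to capture simple features and later layers increasingly abstract ones, which is what lets deep networks handle tasks like vision and speech.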

“It is certainly an exciting field where new applications are within reach,” says Dr. Thomas Trappenberg.

HAL Lab

Dr. Thomas Trappenberg of the Faculty of Computer Science runs the Hierarchical Anticipatory Learning (HAL) Lab. The HAL Lab works in three closely connected areas: computational neuroscience, machine learning and robotics.

“We are most interested in understanding how the brain works—in particular how activities in neurons and the architecture of the brain enable high-level thinking,” says Dr. Trappenberg. “A central ingredient for all of this is how humans and animals learn. This brings us to the scientific area of machine learning.”

A lot of progress has been made recently in understanding the important principles of learning. The lab also has the added benefit of being able to apply these methods to data analysis and data mining.

“We usually make computer simulations to study brains but we want our research to lead us to building models of how the brain really works,” he continues.

“We now think that an even better way to study and evaluate these models is to build artificial agents—robots— to show that they can do high-level tasks like finding objects or planning movements.”

HAL Lab projects

Many research projects from the HAL Lab cross over between two of the three research areas (computational neuroscience, machine learning and robotics), combining the strengths of the entire team.

As an example, the lab works with a local company, Mindful Scientific, to apply machine-learning techniques to EEG data in order to evaluate possible brain injuries, combining machine learning with neuroscience.

The lab works with another company, Pleiades, on a drone (a flying quadcopter) that can follow objects while learning that the appearance of these objects can change, combining robotics and machine learning.

The team is also building a biologically realistic robot arm controller that can move to a target even when the camera input is sometimes interrupted, combining robotics with neuroscience.

Ultimately all three areas are tightly interwoven and the HAL Lab hopes to play an important role in the continued progress of deep learning.