The latest technological buzzword popping up in the news everywhere lately is the term “neural network.” If you come across a state-of-the-art app or piece of software that completes a task or solves a problem you thought a computer couldn’t handle, there’s a good chance that software is powered by a neural network.
So what are neural networks anyway? Neural networks are essentially a subfield of artificial intelligence that tries to replicate the human ability to think using layers upon layers of algorithms.
Machines are incredibly good at simulating real-world scenarios, but doing so can be very difficult, especially when you consider how messy the real world is: it doesn’t always follow a logical structure or defined rules. The real world is, in a nutshell, unpredictable. So instead of writing programs that solve problems directly, programmers are now developing software that learns how to solve problems. Programs like this can eventually adapt to different scenarios and approach problem-solving from perspectives that humans might not even consider.
You can see examples of “software that learns” already in action in everyday devices: phones that translate voice commands, email spam filters, ATMs that recognize signatures, photo applications that automatically organize images into collections and galleries, facial recognition software, and many others.
Mike Butcher of TechCrunch recently wrote an article about an update to SwiftKey, easily the most popular keyboard app for both Android and iPhone users. The new alpha of the software claims to be powered by a neural network that can “think for itself” and anticipate what you will want to type next. SwiftKey has always been driven by a number of preset algorithms, what the company calls its “n-gram” technology, which was already pretty good at accurately predicting your next word. Its major limitation, however, was that it couldn’t understand the underlying meanings of words.
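To make the limitation concrete, here is a minimal sketch of n-gram-style prediction, the general technique the article names. This is an illustrative bigram model over a toy corpus, not SwiftKey’s actual implementation: it simply remembers which word most often followed each previous word, with no grasp of meaning.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a user's typing history (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" -- it follows "the" most often above
```

Notice that the model can only repeat statistics it has already seen; it has no idea that “cat” and “fish” are related concepts, which is exactly the gap a neural network tries to close.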
Jordan Novet of VentureBeat discusses how Google is experimenting with a neural network to help YouTube automatically select the best thumbnail image from an uploaded video. This is nothing new for Google, which has also been delving into machine learning for both its translation software and its Photos app.
Scientists are still working on the best ways to teach computers how to learn and solve problems. The best solution we have at our disposal thus far is algorithms that mimic human thinking and reasoning, known as artificial neural networks. In short, neural networks consist of a combination of smaller mathematical algorithms that are directed to “speak” to each other in order to complete a task or solve a problem. The process a neural network undergoes to “train” itself to solve a problem is known as “deep learning,” because multiple algorithms (neurons) often need to communicate with each other in a layered structure in order to become effective. Tremendous strides have been made to improve the learning speed of computers, but much progress is still needed before they can match the learning speed of a human being.
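The layered idea can be sketched in a few lines of code. This is a hand-built toy, not a trained network: each “neuron” is just a weighted sum passed through a simple on/off activation, and two hidden neurons feed one output neuron. Together they compute XOR, a function no single neuron can compute on its own, which is the whole point of stacking them in layers.

```python
def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum plus bias, then a step activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def xor_network(x1, x2):
    """Two hidden neurons feed one output neuron: a minimal layered network."""
    h1 = neuron([x1, x2], [1, 1], -0.5)    # fires if either input is on (OR)
    h2 = neuron([x1, x2], [-1, -1], 1.5)   # fires unless both inputs are on (NAND)
    return neuron([h1, h2], [1, 1], -1.5)  # fires only if both hidden neurons fire (AND)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_network(a, b))  # prints 0, 1, 1, 0 -- the XOR function
```

In a real network the weights and biases are not set by hand like this; they are adjusted automatically during training, which is what “deep learning” refers to.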