The brave new frontiers of computing

  • Alternative paradigms needed today to crack problems that cannot be solved by classical computing
  • One such paradigm is deep learning, which works in much the same way as the human brain

THE Von Neumann reference architecture and the sequential processing of Turing machines have been the basis for ‘classical’ computers for the last six decades.
 
The juggernaut of technology has seen the semiconductor industry inexorably churn out ever faster and denser processors, bearing out Gordon Moore’s observation that the number of transistors on a chip doubles roughly every 18 months to two years, a trend now famously known as Moore’s Law.
 
These days we have processors with in excess of a billion transistors. We are now approaching the physical limits of how many transistors can be packed onto a chip.
 
There is now a pressing need to look at alternative paradigms to crack problems of the Internet age which cannot be solved by classical computing.
 
In the last decade or so, three new, radical and lateral paradigms have surfaced which hold tremendous promise: i) deep learning, ii) quantum computing, and iii) genetic programming.
 
These techniques hold enormous potential and may offer solutions to problems which would take classical computers anywhere from a few years to a few decades to solve.
 
Deep learning
 
Deep learning is a new area of Machine Learning research. The objective of deep learning is to bring Machine Learning closer to one of its original goals, namely Artificial Intelligence.
 
Deep learning is based on multilevel neural networks called deep neural networks. Deep learning works on large sets of unclassified (unlabelled) data and is able to learn lower-level patterns, on which it builds higher-level representations, in much the same way as the human brain.
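 
To make that concrete, below is a minimal sketch in Python (with NumPy) of layer-by-layer unsupervised feature learning: each layer is a small autoencoder that learns to reconstruct its unlabelled input, and the features it learns become the input to the next layer. The function names, layer sizes and training settings here are illustrative assumptions, not the method of any particular system.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_autoencoder(data, n_hidden, lr=0.5, epochs=200):
        # Learn features by trying to reconstruct the unlabelled input
        n_in = data.shape[1]
        w_enc = rng.normal(0.0, 0.1, (n_in, n_hidden))
        w_dec = rng.normal(0.0, 0.1, (n_hidden, n_in))
        for _ in range(epochs):
            hidden = sigmoid(data @ w_enc)      # learned features
            recon = sigmoid(hidden @ w_dec)     # attempted reconstruction
            err = recon - data
            # Gradient descent on the mean squared reconstruction error
            d_recon = err * recon * (1.0 - recon)
            d_hidden = (d_recon @ w_dec.T) * hidden * (1.0 - hidden)
            w_dec -= lr * (hidden.T @ d_recon) / len(data)
            w_enc -= lr * (data.T @ d_hidden) / len(data)
        return w_enc

    # Unlabelled data, e.g. flattened 8x8 image patches
    X = rng.random((500, 64))
    w1 = train_autoencoder(X, 32)   # first level: simple patterns
    h1 = sigmoid(X @ w1)
    w2 = train_autoencoder(h1, 16)  # second level: patterns of patterns

Stacking layers this way, with each level trained on the output of the one below it, was how many early deep networks were pre-trained before being fine-tuned on a task.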
 
Deep learning tries to mimic the human brain. For example, the visual cortex comprises a sequence of areas through which signals flow from one level to the next. This feature hierarchy represents the input at increasing levels of abstraction, with the more abstract features further up the hierarchy defined in terms of the lower-level ones.
 
Deep learning is based on the premise that humans organise ideas hierarchically and compose more abstract concepts from simpler ones.
 
Deep learning algorithms generally require powerful processors and work on enormous amounts of data to learn key features. A defining characteristic of deep learning algorithms is that the input is passed through several ‘non-linearities’ before the output is generated.
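 
As an illustration of that point, the following sketch (again in Python with NumPy; the layer sizes and weights are arbitrary assumptions, not a trained model) passes an input through three linear maps, each followed by a non-linearity, before producing the output:

    import numpy as np

    def relu(x):
        # A common non-linearity: negative values are clipped to zero
        return np.maximum(0.0, x)

    def forward(x, layers):
        # Each layer applies a linear map followed by a non-linearity
        for w, b in layers:
            x = relu(x @ w + b)
        return x

    rng = np.random.default_rng(1)
    layers = [(rng.normal(size=(64, 32)), np.zeros(32)),
              (rng.normal(size=(32, 16)), np.zeros(16)),
              (rng.normal(size=(16, 8)), np.zeros(8))]
    y = forward(rng.random(64), layers)  # input crosses three non-linearities

Without those non-linearities, the three layers would collapse into a single linear transformation, which is why they are essential to learning the layered representations described above.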
 
About three years ago, researchers on the Google Brain project ran a deep learning algorithm on 10 million still images extracted from YouTube videos, using a cluster of thousands of processor cores.
 
Google Brain was able to infer, entirely on its own, that these images contained a preponderance of cats. A seemingly trivial result, but one of great significance, as the algorithm arrived at it without any labelled input!
 
An interesting article in Nature discusses how deep learning has proved useful for several scientific tasks, including handwriting recognition, speech recognition, natural language processing, and the analysis of three-dimensional images of brain slices.
 
The importance of deep learning has not been lost on the technology titans like Google, Facebook, Microsoft and IBM, which have all taken steps to stay ahead in this race.
 
Chiang Kai Hua is general manager of Global Technology Services at IBM Malaysia
 