Lead Story

The learning curve

Their ability to learn and their utility as tools for thinking give neural networks and Artificial Intelligence a unique place in human history.

Roughly 2.5 million years ago, Homo habilis made the first tools of any living species. Used for hunting, these tools marked the beginning of a long journey, during which the genus Homo learned and passed on skills. Homo sapiens took this skill to new heights, perfecting exquisitely sophisticated tools for a variety of tasks, some of them capable of doing their jobs without human help. In the late 20th and 21st centuries, human beings began a completely new journey.

Some tools that Homo sapiens now make can learn and improve on their own, a feature that has earned them the monikers Machine Learning (ML) and Artificial Intelligence (AI). As our first story describes (Augmented Intelligence: AI in the service of science), these tools are used more as an aid to thought than as an aid to physical work. These two features – the ability to learn and the utility as tools for thinking – make AI and ML unique in human history, with the same historical significance as the first flake tools of Homo habilis.

Machine Learning, traced back to its roots, is nearly a century old. Early attempts at ML, in the 1940s and 1950s, involved statistical methods. In 1959, the term machine learning was coined by Arthur Samuel, an IBM employee and AI pioneer. Statistical methods continue to be used in some sectors even today, as they provide reliable and explainable solutions. An example is an algorithm to classify a list of people as men or women based on their heights. It goes like this: from the data, compute the average height of men and of women, and define a window around each average. If the input height falls into the window allowed for men (or women), output that the subject is a man (or woman). This simple algorithm employs basic statistics, such as the average height, to answer a query.
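
The sketch below, in Python, illustrates this window-based classification. It is a minimal illustration, not the article's own algorithm; the average heights and the window width are assumed figures chosen for the example.

# A minimal sketch of the window-based classifier described above.
# The average heights and window width are illustrative assumptions,
# not figures from the article.

MEAN_HEIGHT_MEN_CM = 175.0    # assumed average height of men, taken as learned from data
MEAN_HEIGHT_WOMEN_CM = 162.0  # assumed average height of women, taken as learned from data
WINDOW_CM = 6.0               # assumed half-width of the window around each average

def classify_height(height_cm):
    """Return 'man', 'woman', or 'unclear' based on which window the height falls into."""
    if abs(height_cm - MEAN_HEIGHT_MEN_CM) <= WINDOW_CM:
        return "man"
    if abs(height_cm - MEAN_HEIGHT_WOMEN_CM) <= WINDOW_CM:
        return "woman"
    return "unclear"

print(classify_height(178))  # falls in the men's window
print(classify_height(160))  # falls in the women's window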

Statistical methods such as Gaussian mixture models or the Hidden Markov model, which is used in handwriting recognition, can be very sophisticated. These are also learning models: from the data, the machine first learns the average heights of men and women, and then learns to classify an input height as describing a man or a woman. Statistical methods of automated learning were used in defence (such as radar and sonar) and in finance.
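
The sketch below shows this learning step with a two-component Gaussian mixture model: the model estimates the two average heights from data and then assigns a new height to one of the two groups. It is a minimal illustration using scikit-learn and synthetic data; the means, spreads, and sample sizes are assumptions made for the example.

# A sketch of learning from data with a two-component Gaussian mixture model.
# The height data is synthetic, generated with assumed means and spreads;
# it is not data from the article.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
heights = np.concatenate([
    rng.normal(162, 6, 500),   # one group, assumed mean 162 cm
    rng.normal(175, 7, 500),   # the other group, assumed mean 175 cm
]).reshape(-1, 1)

# The model learns the two average heights (and their spreads) from the data.
gmm = GaussianMixture(n_components=2, random_state=0).fit(heights)
print("Learned means (cm):", gmm.means_.ravel())

# Classify a new height by the learned component it most likely belongs to.
new_height = np.array([[170.0]])
print("Component assigned to 170 cm:", gmm.predict(new_height)[0])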

In the 1970s, statistical methods came to be used for speech processing. Bishnu Atal, an engineer at Bell Labs, used them to improve speech algorithms in the 1980s. His work revolutionised voice coding and telephony.

"There is a great advantage to statistical models," says S. Umesh, Professor in the Electrical Engineering department of the Indian Institute of Technology (IIT) Madras. "Because the theory comes with upper and lower bounds, we know exactly what will work," says Umesh, who works in automatic speech recognition.
