Defining Learning in the context of Artificial Intelligence by Cris Doloc

May 07, 2018
5 min read
Defining Artificial Intelligence

The term “Artificial Intelligence” (AI) is one of the most overused and, some would say, abused terms in today’s tech parlance. Do people really understand the meaning of AI? Or is it merely one of those buzzwords, like “Big Data”, “IoT”, “Blockchain”, or “Quantum Computing”, meant to raise interest and dollars for new startups?

Dr. Leslie Valiant, Professor of Computer Science and Applied Mathematics at Harvard and author of the “Probably Approximately Correct” learning model [1], is one of today’s greatest computational theorists and computer scientists, and he is well known for never using the term AI.

At the beginning of his scientific career, while talking to the famous Edsger Dijkstra (one of the most influential computer scientists who ever lived; see Dijkstra’s algorithm), Dr. Valiant was asked about the subject of his research. After he proudly answered “AI”, Dijkstra replied: “Why don’t you work first on the ‘Intelligence’ part?” That was a “Wow” moment for Dr. Valiant, and it prompted him to dedicate most of his scientific career to studying the mechanisms of Learning.

Machine Learning: What is it and why does it matter?

A term that feels a lot more anchored in the reality of Nature is “Machine Learning”. Although it has its roots in the seminal work of Alan Turing and John von Neumann in the 1940s, Machine Learning became a practical reality much later, with the advent of High-Performance Computing technologies.

The field of Machine Learning is built upon the concept of “Learning”, which is believed to be central to the notion of “Intelligence”. Human learning is the ability to achieve Intelligence via a collection of yet-to-be-identified algorithms that are “hard-wired” in the human brain. Although we are a far cry from understanding this very complex biological system, ML is in widespread use nowadays because its algorithms are able to exploit “learnable” regularities in all kinds of data: text, images, genomic codes, or electrical signals.

Humanity is at the point where it can only begin to emulate human intelligence with the help of machines (computer systems). This grandiose goal of coding the concept of “Artificial Intelligence” is proving to be much more difficult to achieve than was initially advertised!

Faced with any computational task, a computer can either be “Programmed” to handle it or be made to “Learn” how to handle it. Or maybe a combination of the two techniques will do it! Because of its inherent statistical nature, Learning cannot be error-free, as opposed to the programmatic approach, where one could eventually create faultless code.
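To make the contrast concrete, here is a minimal, purely hypothetical sketch in Python (the task, the tiny data set, and the keyword list are invented for illustration): a spam check that is explicitly programmed as a fixed rule, next to one that learns word weights from labeled examples and is therefore statistical and fallible.

```python
# A toy contrast between "Programming" and "Learning" (illustrative only).

# 1) Programmed: the rule is written by hand and behaves deterministically.
SPAM_WORDS = {"winner", "free", "prize"}          # hand-picked keywords

def programmed_filter(message: str) -> bool:
    """Flag a message as spam if it contains any hard-coded keyword."""
    return any(word in message.lower() for word in SPAM_WORDS)

# 2) Learned: word weights are estimated from labeled examples (a tiny
#    perceptron), so the resulting rule depends entirely on the data it saw.
def learned_filter(train, message: str, epochs: int = 20) -> bool:
    weights, bias = {}, 0.0
    for _ in range(epochs):
        for text, is_spam in train:
            words = text.lower().split()
            score = bias + sum(weights.get(w, 0.0) for w in words)
            target = 1 if is_spam else -1
            if score * target <= 0:                # misclassified: update
                for w in words:
                    weights[w] = weights.get(w, 0.0) + target
                bias += target
    score = bias + sum(weights.get(w, 0.0) for w in message.lower().split())
    return score > 0

train = [("free prize inside", True), ("meeting at noon", False),
         ("you are a winner", True), ("lunch tomorrow?", False)]
msg = "claim your free prize"
print(programmed_filter(msg), learned_filter(train, msg))
```

The programmed rule never deviates from its specification, while the learned rule is only as good as the examples it was trained on, which is exactly the trade-off described above.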

So, apparently, Learning may be at a disadvantage compared to Programming. This is evident when one deals with applications where one knows how to specify the model (hypothesis) and the expected outcome: the best solution is simply to code it, assuming that this is possible.

Unfortunately, there are many situations where coding is just not possible! Therefore Learning becomes an absolutely indispensable tool for situations where one does not know how to model the behavior of the system or what outcome to expect. The only hope is in the data, more precisely, learning from it!

Types of Learning

There are numerous examples of success in today’s world where Learning has come to the rescue, while programming remains an essential tool for completing the task: spam filtering, recommender systems, Natural Language Processing, Computer Vision, etc.

Learning is not just an abstract concept in the realm of “Artificial Intelligence”; it represents its central pivot point. Without Learning there would be no Intelligence! Learning can be used to detect regularities in the most complex and “theoryless” data.

In the realm of Learning, there are two schools of thought: Statistical Learning and Machine Learning. The former assumes that the data is generated by a given “stochastic” data model, while the latter uses algorithmic, “theoryless” models that treat the data-generating mechanism as unknown [2].
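As a rough illustration of the two cultures (not taken from the article, and assuming NumPy and scikit-learn are available), the sketch below fits the same synthetic data with a parametric linear model, whose estimated coefficients embody a hypothesized data model, and with a random forest, which treats the generating mechanism as a black box and is judged purely by predictive accuracy.

```python
# Two modeling cultures on the same synthetic data (illustrative sketch).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)   # "unknown" true mechanism

# Data-modeling culture: assume a stochastic model y = a*x + b + noise
# and interpret its estimated parameters.
linear = LinearRegression().fit(X, y)
print("assumed model: y =", linear.coef_[0], "* x +", linear.intercept_)

# Algorithmic-modeling culture: no assumed mechanism, only predictive accuracy.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("linear R^2:", linear.score(X, y))
print("forest R^2:", forest.score(X, y))
```

On this toy data the interpretable linear hypothesis is simply wrong, while the algorithmic model captures the regularity without ever naming it, which is the essence of the contrast discussed next.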

While Statistical Learning has been widely adopted by statisticians, it has led to “irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems”, according to Dr. Leo Breiman, Professor of Statistics at Berkeley [2].

By contrast, Machine Learning has developed into a mature discipline that is currently used for analyzing large and complex data sets. The novelty that ML has injected into the scientific community is that it has shifted the emphasis from the development of theoretical models to problem-solving methodologies that use data as the ultimate messenger of reality!

To break it down into several algorithm-like steps, Learning could be encoded as follows (a toy sketch of this loop is given after the list):

  1. PERCEPTION - the ability to use our 5(?!) senses to process (label) our current “state” in the natural habitat

  2. ACQUISITION – the ability to acquire, process (filter) and store information from the signals transmitted by all these sensors

  3. COMPUTATION – the capacity to computationally process (in the brain) these input signals and produce outputs that are actionable

  4. ACTIONABILITY – the capability to act upon the outcomes of the computation by generating responses that are commensurate with the stimuli received from the natural sensors

  5. REPEATABILITY – the skill to repeat the process as many times as needed in order to optimize the innate, yet unknown, Cost Function that will ensure the efficiency of the whole process!
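As a purely illustrative sketch (the environment, the noisy sensor, and the cost function below are invented placeholders, not anything prescribed by the article), the five steps can be mapped onto a simple agent loop in Python:

```python
import random

# Toy learning loop mapping the five steps above onto code (illustrative only).
HIDDEN_TARGET = 7.0        # the "unknown" regularity the agent must discover

def perceive() -> float:
    """1. PERCEPTION: sample the current state through a noisy sensor."""
    return HIDDEN_TARGET + random.gauss(0.0, 1.0)

def acquire(memory: list, signal: float) -> list:
    """2. ACQUISITION: filter and store the incoming signal."""
    memory.append(signal)
    return memory[-20:]                      # keep a bounded window

def compute(memory: list, estimate: float) -> float:
    """3. COMPUTATION: turn stored signals into an actionable correction."""
    return sum(memory) / len(memory) - estimate

def act(estimate: float, correction: float, rate: float = 0.5) -> float:
    """4. ACTIONABILITY: respond in proportion to the computed correction."""
    return estimate + rate * correction

def cost(estimate: float) -> float:
    """The (normally unknown) cost the loop implicitly tries to minimize."""
    return (estimate - HIDDEN_TARGET) ** 2

# 5. REPEATABILITY: repeat the cycle until the cost becomes small.
estimate, memory = 0.0, []
for step in range(200):
    memory = acquire(memory, perceive())
    estimate = act(estimate, compute(memory, estimate))
print(f"final estimate: {estimate:.2f}, cost: {cost(estimate):.4f}")
```

In a real system the cost function is not known explicitly; the feedback from the environment plays its role, which is exactly the point of step 5.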

Conclusion

In recent years, Machine Learning has become an almost ubiquitous tool in our daily lives. The main reason is that ML has benefited enormously from three important developments:

  • access to better algorithms (last 50 years)

  • availability of much faster computer platforms (GPUs, FPGAs - last 10 years)

  • availability of a LOT of data

In fact, the availability of enormous amounts of data that can be “learned” from represents a major development for our civilization. This is sometimes called the “Fourth Paradigm” of scientific discovery.

More about this in the next post!

Suggested readings from Cris Doloc, Ph.D., FintelligeX Inc., Chicago

[1] Prof. Leslie Valiant, “Probably Approximately Correct”, Basic Books, 2013.

[2] Dr. Leo Breiman, “Statistical Modeling: The Two Cultures”, Statistical Science, 2001, Vol. 16, No. 3, pp. 199–231.
