
Artificial Intelligence & Machine Intelligence – A Non-Scientist Point of View

Artificial Intelligence, or AI as most everyone calls it, is an exciting term with very little agreement on what it actually means. Defining it is extremely challenging because, unlike "cat", which is easily agreed upon and can be pointed to, AI is more of a concept. The challenge is closer to defining "liberty", where everyone has a personal idea of what it means. Getting even five people to agree on a definition can be almost impossible.


Working for Captario, I can see that what we do today would easily have been considered AI ten years ago, yet now it is not so extraordinary. What we do is more comfortably called Machine Intelligence (MI). As the dreaded "Sales Guy", I have been providing technology solutions to my customers for twenty-plus years. Over that time, AI's shifting definition has plagued my efforts to be transparent and accurate in describing the changing portfolio of products and solutions I have sold. I have debated several times with Captarian analysts whether features (existing and proposed) should be called AI, MI, or just clever workflows. We use an extremely flexible modeling schema to analyze pharmaceutical industry challenges, then put the resulting models through a proprietary Monte Carlo simulation engine to answer hard questions that at first seemed beyond answering.
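For the non-scientist, "Monte Carlo simulation" simply means running thousands of randomized what-if scenarios and looking at the distribution of outcomes. Below is a toy sketch in Python of that idea applied to a drug-development program. Every phase name, duration, and success probability is invented purely for illustration; this is not Captario's modeling schema or simulation engine.

```python
import random

# Toy Monte Carlo sketch of a drug-development program.
# Phase durations (years) and success probabilities are made up
# for illustration only; real programs use calibrated inputs.
PHASES = [
    ("Phase 1", 1.5, 0.63),
    ("Phase 2", 2.5, 0.31),
    ("Phase 3", 3.0, 0.58),
    ("Filing",  1.0, 0.85),
]


def one_trial(rng):
    """Simulate one program; return total years if approved, else None."""
    years = 0.0
    for _name, duration, p_success in PHASES:
        # Randomize the nominal duration by +/- 30% to reflect uncertainty.
        years += duration * rng.uniform(0.7, 1.3)
        if rng.random() > p_success:
            return None  # the program fails in this phase
    return years


def simulate(n=100_000, seed=42):
    rng = random.Random(seed)
    outcomes = [one_trial(rng) for _ in range(n)]
    successes = [y for y in outcomes if y is not None]
    p_approval = len(successes) / n
    avg_years = sum(successes) / len(successes)
    print(f"Probability of approval: {p_approval:.1%}")
    print(f"Average time to approval (successful runs): {avg_years:.1f} years")


if __name__ == "__main__":
    simulate()
```

Run it once and you get an estimated probability of approval and an average timeline; run it with different assumptions and you can compare strategies, which is the kind of "hard question" such engines are built to answer.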


I have chosen my own definition:

"Artificial Intelligence is the computer-based technology by which humans can off-load challenging but routine work to machines."

But Dave, you say, isn't that too simple?


My response is, "What is wrong with that?" The experts' definition is constantly evolving and looks different every five years or so, as the bar of "intelligence" for non-human thought keeps getting raised. Non-human thought: what does that even mean?


AI and MI are all about using machines (computers) to relieve humans of routine tasks by crunching numbers more efficiently than a human brain can. The complexity of "routine" is what keeps changing. The visionaries who feel AI is all about computers that can think independently and develop their own code are trying too hard to count the angels dancing on the head of a pin. Technology that advances for the sake of novelty without benefiting humanity is neither useful nor desirable.


The value of separating AI from MI is minimal. I understand the viewpoint of the purists who envision AIs creating new lines of thought and discovery. Pure AI is exciting, but it is also the Pandora's box feared by Stephen Hawking, who stated that "the development of full artificial intelligence could spell the end of the human race", a distrust that I think goes a little too far, though I understand some of his concerns.


Isaac Asimov's laws of robotics sound like a possible safeguard, but they won't prevent abuses of AI/MI. The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm; the second, that a robot shall obey any instruction given to it by a human; the third, that a robot shall avoid actions or situations that could cause harm to itself. The immensely powerful cloud computing platforms of Amazon, Google, Microsoft, and others are real, practical, and alive today. Stephen Hawking voiced his concern about full AI, but the question of what full AI really is exceeds my scope.

In my reading I have seen additional concerns, less doomsday in tone: that AI and MI carry inherent biases that need to be addressed. No system is perfect, and trial and error is often the best way to identify inequity. Perhaps the best path forward is to create a framework under which researchers and intelligence scientists operate.

Machine Intelligence (which I prefer to the ominous overtones of AI) is then a tool for off-loading routine and tedium from humans to machines, which don't get bored or lazy and make fewer errors caused by inattentiveness. An ideal use of MI is the algorithms Google and LinkedIn use to scour the Internet and alert me to news on topics like Drug Discovery, or to a connection who was recently in the news. The almost impossible task of searching for news on all my contacts every week has been automated. When exciting promotions or innovations occur, they are delivered to me rather than my actively searching for them or missing them altogether.

The MI/AI then builds on these same directed (or semi-directed) searches to keep me alerted to similar companies. Valuable information is delivered to me that I never thought to look for. Google News knows me and constantly drops unexpected but valuable articles into my feed, something my Boston Globe digital subscription never quite accomplishes.

LinkedIn and Google are just two of the tools running MI algorithms on my behalf. I also receive machine-driven investment advice from Fidelity and the Motley Fool. There are dozens (hundreds?) of machine bots running to feed me data. By the same token, there are thousands of other machine algorithms running to analyze me and serve my information to their masters. Have you ever been talking with a friend about buying a new product, only to have Amazon "magically" put it in front of you the next time you visit the site? How many times has Google's autofill jumped to your question in three keystrokes or less? There is definitely an Orwellian overtone to some of the help I receive daily via my smartphone and PC.

So, what's my point? AI and MI are exciting tools that can be a bit terrifying. Visionaries like Stephen Hawking and Isaac Asimov have shared their opinions, but as a technology solutions salesman I feel that Ronald Reagan stated the best policy: trust, but verify.

