
The Paradox of Artificial Intelligence (Part-3)

It is a common notion nowadays that, with the increasing power of AI, machines may at some point rule over humans. Can that really happen?

Let us have a look at how machines actually learn and act. Machine learning is commonly divided into a few basic categories (a short code sketch illustrating all three follows the list below):

* Supervised Learning: In supervised learning, machines act based on a given data set and pre-defined correct outputs. For example, deciding whether the weather is good or bad based on certain weather parameters such as temperature, humidity and air flow.

* Unsupervised Learning: In this case, machines process input data without any pre-defined instruction regarding the correct output. For example, taking a sample of 1,000 people and grouping them into certain segments based on their residential location.

* Reinforcement Learning: In this scenario no training data exists, so the machine learns continuously from experience and keeps updating its actions accordingly. For example, a translation system that keeps refining its output based on feedback from users.
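
To make these categories concrete, here is a minimal, hypothetical Python sketch. The weather readings, coordinates and reward values are invented purely for illustration, and the supervised and unsupervised parts assume the widely used scikit-learn library; none of this is drawn from any specific production system.

```python
# Minimal, hypothetical sketches of the three learning styles described above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# 1) Supervised learning: features (temperature, humidity, air flow) paired
#    with a pre-defined correct output ("good" / "bad" weather).
weather = np.array([[30, 60, 10], [35, 90, 5], [22, 40, 15], [38, 85, 2]])
labels = ["good", "bad", "good", "bad"]
clf = DecisionTreeClassifier().fit(weather, labels)
print(clf.predict([[28, 55, 12]]))          # predicts one of the taught labels

# 2) Unsupervised learning: residential coordinates of people, grouped into
#    segments with no correct answer given in advance.
locations = np.array([[23.8, 90.4], [23.7, 90.4], [22.3, 91.8], [22.4, 91.8]])
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(locations)
print(segments)                             # cluster ids discovered by the model

# 3) Reinforcement learning: no training set at all; an agent tries actions,
#    receives rewards, and updates its behaviour (toy tabular learning loop).
q = np.zeros(2)                             # estimated value of actions 0 and 1
rewards = [0.0, 1.0]                        # action 1 happens to pay off
rng = np.random.default_rng(0)
for _ in range(100):
    a = rng.integers(2)                     # explore by trying random actions
    q[a] += 0.1 * (rewards[a] - q[a])       # learn from the observed reward
print(q.argmax())                           # the agent has learned to pick action 1
```

Running the sketch prints a predicted weather label, the discovered cluster ids and the action the toy agent settles on, mirroring the three examples above.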

If we look for similar attributes in human beings, supervised learning can be considered the usual learning we go through in educational institutions, with defined contents and instructors. Unsupervised learning corresponds to other systematic ways of learning in which we make judgements by ourselves. And reinforcement learning means continuously learning from the different experiences of life.

If we look at ourselves, we find that a major portion of our learning is reinforcement learning, which we go through over our entire life journey. This learning does not have any fixed binding or goals, and obviously every human being has his or her own way of observing, adapting and learning things. That creates a unique difference among people, and even more so between machines and humans.

Let us consider a particular case. Medical image processing is a common application of AI, where the goal is automated diagnosis through processing of images such as X-ray, ultrasound, CT scan and MRI. Now, machines need a certain level of predefined inputs and outputs to perform such processing; if anything appears beyond that domain, the machine may fail to identify it. Suppose a scissor is accidentally left inside a patient's body after surgery. In the X-ray image, the scissor is clearly visible, so any person can detect its presence just by having a look at the image. However, a machine learning algorithm may fail to identify it, because it was never expected to know what a scissor looks like inside a human body; it may simply treat it as just another part of the body!
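
As a toy sketch of why this happens (an assumed, simplified setup, not any real diagnostic system): a classifier trained on a fixed set of categories can only ever answer with one of those categories, so an unexpected object such as a scissor has no label it could possibly be assigned.

```python
# A minimal sketch showing why a closed-set classifier cannot flag an object
# it was never trained on: the softmax output covers only the known classes.
import numpy as np

CLASSES = ["normal", "fracture", "tumor"]          # hypothetical label set
rng = np.random.default_rng(0)
W = rng.normal(size=(len(CLASSES), 16))            # stand-in for trained weights

def predict(image_features: np.ndarray) -> str:
    """Return the most probable *known* class, whatever the input is."""
    logits = W @ image_features
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return CLASSES[int(np.argmax(probs))]

# Even a feature vector from an X-ray containing a forgotten scissor is
# forced into one of the three known categories; "scissor" is not an option.
unexpected_scan = rng.normal(size=16)
print(predict(unexpected_scan))                    # always one of CLASSES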

Another such example in recent times was Amazon’s recruitment dilemma. Since 2014, Amazon had been using an automated algorithm to review job applicants’ résumés. The company’s experimental hiring tool used artificial intelligence to give candidates scores ranging from one to five stars. But by 2015, the company realized its new system was not rating candidates for technical posts in a gender-neutral way. That is because Amazon’s computer models were trained to vet applicants by observing patterns in résumés submitted to the company over a 10-year period. Most of those past résumés came from men, a reflection of male dominance across the tech industry. In effect, Amazon’s system taught itself that male candidates were preferable. It penalized résumés that included the word “women’s”, as in “women’s chess club captain”, and it downgraded graduates of two all-women’s colleges. Amazon attempted to adjust the programs to make them neutral to these particular terms, but that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory. So ultimately the automation endeavour was disbanded.
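
To see how such a bias can arise purely from historical data, here is a deliberately tiny, synthetic sketch (the résumé snippets, labels and the use of scikit-learn are all assumptions for illustration; this is not Amazon’s system): when past hiring decisions correlate with a word like “women’s”, a model trained on those decisions learns a negative weight for it even though the technical content is identical.

```python
# A toy, entirely synthetic illustration of training-data bias.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical résumés: past decisions happened to reject the
# résumés containing "women's", so the word co-occurs with rejection.
resumes = [
    "python developer chess club captain",          # hired
    "java engineer robotics team lead",             # hired
    "python developer women's chess club captain",  # rejected
    "java engineer women's robotics team lead",     # rejected
]
labels = [1, 1, 0, 0]  # 1 = hired, 0 = rejected in the historical data

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The learned weight for the token "women" is negative, i.e. its presence
# pushes the score towards rejection, purely because of the skewed history.
weight = model.coef_[0][vec.vocabulary_["women"]]
print("weight for 'women':", weight)
```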

So, as the functional theories along with such practical examples show, it may still take some more time for machines to come near the intelligence, analytical ability and critical thinking of human beings. Can that really happen, and if it does, what will the impact be? We shall talk about that in the next part of this article series.

 
