Do you know about the harms of AI? It is important for everyone to know.
The real risks of artificial intelligence
If you believe some AI-watchers, we are
racing towards the Singularity – a point at which artificial intelligence
outstrips our own and machines go on to improve themselves at an exponential
rate. If that happens – and it’s a big if – what will become of us?
In the last few years, several
high-profile voices, from Stephen Hawking to Elon Musk and Bill Gates, have
warned that we should be more concerned about possible dangerous outcomes of
super-smart AI. And they’ve put their money where their mouth is: Musk is among
several billionaire backers of OpenAI, an organization dedicated to developing
AI that will benefit humanity.
But for many, such fears are overblown.
As Andrew Ng at Stanford University, who is also chief scientist at Chinese
internet giant Baidu, puts it: fearing a rise of killer robots is like worrying
about overpopulation on Mars.
That’s not to say our increasing
reliance on AI does not carry real risks, however. In fact, those risks are
already here. As smart systems become involved in ever more decisions in arenas
ranging from healthcare to finance to criminal justice, there is a danger that
decisions about important parts of our lives are being made without sufficient
scrutiny. What’s more, AIs could have knock-on effects that we have not
prepared for, from changing our relationship with doctors to the way our
neighborhoods are policed.
What exactly is AI? Very simply, it’s
machines doing things that are considered to require intelligence when humans
do them: understanding natural language, recognizing faces in photos, driving a
car, or guessing what other books we might like based on what we have
previously enjoyed reading. It’s the difference between a mechanical arm on a
factory production line programmed to repeat the same basic task over and over
again, and an arm that learns through trial and error how to handle different
tasks by itself.
How is AI helping us? The leading
approach to AI right now is machine learning, in which programs are trained to
pick out and respond to patterns in large amounts of data, such as identifying
a face in an image or choosing a winning move in the board game Go. This
technique can be applied to all sorts of problems, such as getting computers to
spot patterns in medical images, for example. Google’s artificial intelligence
company DeepMind is collaborating with the UK’s National Health Service on a
handful of projects, including ones in which their software is being taught to
diagnose cancer and eye disease from patient scans. Others are using machine
learning to catch early signs of conditions such as heart disease and
Alzheimer’s.
Artificial intelligence is also being
used to analyze vast amounts of molecular information looking for potential new
drug candidates – a process that would take humans too long to be worth doing.
Indeed, machine learning could soon be indispensable to healthcare.
Artificial intelligence can also help
us manage highly complex systems such as global shipping networks. For example,
the system at the heart of the Port Botany container terminal in Sydney manages
the movement of thousands of shipping containers in and out of the port,
controlling a fleet of automated, driverless straddle-carriers in a completely
human-free zone. Similarly, in the mining industry, optimization engines are
increasingly being used to plan and coordinate the movement of a resource, such
as iron ore, from initial transport on huge driverless mine trucks, to the
freight trains that take the ore to port.
AIs are at work wherever you look, in
industries from finance to transportation, monitoring the share market for
suspicious trading activity or assisting with ground and air traffic control.
They even help to keep spam out of your inbox. And this is just the beginning
for artificial intelligence. As the technology advances, so too does the number
of applications.
So what's the problem? Rather than
worrying about a future AI takeover, the real risk is that we put too much
trust in the smart systems we are building. Recall that machine learning works
by training software to spot patterns in data. Once trained, it is then put to
work analyzing fresh, unseen data. But when the computer spits out an answer,
we are typically unable to see how it got there.
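To make that concrete, here is a minimal sketch (not from the article) of the train-then-predict workflow described above, written in Python with scikit-learn. The feature names and all of the numbers are hypothetical, purely for illustration.

```python
# A minimal sketch of the pattern described above: train on labelled data,
# then ask for answers on fresh, unseen data. All numbers are made up.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each row is [age, blood pressure],
# label 1 = "high risk", 0 = "low risk".
X_train = [[34, 120], [71, 160], [50, 130], [80, 170], [29, 110], [65, 150]]
y_train = [0, 1, 0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)            # learn patterns in the training data

# The computer "spits out an answer" for a patient it has never seen...
print(model.predict([[58, 145]]))      # e.g. [1]

# ...but that answer is spread across 100 decision trees, so it is hard
# to see how the model got there -- the opacity the article warns about.
```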
There are obvious problems here. A
system is only as good as the data it learns from. Take a system trained to
learn which patients with pneumonia had a higher risk of death, so that they
might be admitted to hospital. It inadvertently classified patients with asthma
as being at lower risk. This was because in normal situations, people with
pneumonia and a history of asthma go straight to intensive care and therefore
get the kind of treatment that significantly reduces their risk of dying. The machine
learning took this to mean that asthma + pneumonia = lower risk of death.
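A toy illustration of how that pattern can creep in is sketched below. The numbers are hypothetical, not the data from the pneumonia study: because asthmatic pneumonia patients were routinely sent to intensive care, their recorded outcomes look better, and a model trained only on those outcomes duly learns that asthma means lower risk.

```python
# Hypothetical illustration of the asthma effect described above.
# Asthmatic pneumonia patients went straight to intensive care, so their
# recorded death rate is lower -- and the model learns exactly that.
from sklearn.linear_model import LogisticRegression

# Single feature: has_asthma (0/1). Label: 1 = died.
# 80 non-asthmatic patients with a higher recorded death rate,
# 20 asthmatic patients with a lower one (thanks to the extra care).
X = [[0]] * 80 + [[1]] * 20
y = [1] * 12 + [0] * 68 + [1] * 1 + [0] * 19

model = LogisticRegression().fit(X, y)

# The model assigns a lower predicted risk to asthma patients, mirroring
# the biased data rather than the true underlying risk.
print(model.predict_proba([[1]])[0][1])   # asthma: lower predicted risk
print(model.predict_proba([[0]])[0][1])   # no asthma: higher predicted risk
```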
As AIs are rolled out to assess
everything from your credit rating to suitability for a job you are applying
for to criminals’ chance of reoffending, the risks that they will sometimes get
it wrong – without us necessarily knowing – get worse.
Since so much of the data that we feed
AIs is imperfect, we should not expect perfect answers all the time. Recognizing
that is the first step in managing the risk. Decision-making processes built on
top of AIs need to be made more open to scrutiny. Since we are building
artificial intelligence in our own image, it is likely to be both as brilliant
and as flawed as we are.
Questions 1-9
Complete the sentences below.
Write NO MORE THAN TWO WORDS from the passage for each answer.
Write your answers in boxes 1-9 on your answer sheet.
1. The Singularity is the point where AI ………… our own.
2. Many people, including Stephen Hawking, Elon Musk and Bill
Gates, have warned us about possible ………… of super-smart AI.
3. According to Andrew Ng, fearing a rise of………….. is
similar to worrying about overpopulation on Mars.
4. There is a danger that decisions about important parts of our
lives, in areas like healthcare, finance and …………, will be made without
sufficient scrutiny.
5. Simply put, AI is machines doing things that are
considered to require………….. when humans do them.
6. Nowadays, the main approach to AI is…………. .
7. DeepMind, in collaboration with the UK’s National
Health Service, works on a handful of projects, including one where software
learns how to …………… and eye disease.
8. In the near future, machine learning could be …………… to
healthcare.
9. AI might also help in managing………… networks.
Questions 10-13
Do the following statements agree with the information given in Reading
Passage? In boxes 10-13 on your answer sheet, write
TRUE if the statement agrees with the information
FALSE if the statement contradicts the information
NOT GIVEN if there is no information on this
10. AI
works in many different industries nowadays.
11. We shouldn't put too much trust in AI in the future.
12. The quality of the data doesn't affect the ability of AI to
learn information correctly.
13. We can get perfect answers from AI all the
time.