
Robotics7 Module2

The document discusses Asimov's laws of robotics, emphasizing their importance in preventing robots from harming humans or acting against their creators. It highlights the potential risks of advanced robots surpassing human intelligence and the necessity of embedding these laws into all robotic systems to ensure safety. The text warns of the consequences if robots were programmed without these laws, potentially leading to a scenario where they could threaten human existence.


Understand the laws of robotics.

Enumerate and explain the laws of robotics.


Apply the laws of robotics in technology.
Appreciate and value the laws of robotics.

A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
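The priority ordering among these laws can be illustrated as a small decision rule in code. This is only a sketch: the `Action` type and its boolean fields are assumptions made for the example, not anything specified by Asimov.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # would this action injure a human?
    allows_harm: bool = False        # would inaction here let a human come to harm?
    ordered_by_human: bool = False   # was this action commanded by a human?
    endangers_self: bool = False     # would this action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, or allow harm through inaction.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders (already checked against the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: protect own existence, subordinate to the first two laws.
    return not action.endangers_self
```

Note how the checks are ordered: an order from a human is refused the moment it would harm anyone, and self-preservation is only consulted last, exactly the hierarchy the laws describe.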

Just like a human being, a robot must protect its own existence. Let us keep an eye on whether robots obey these laws. If you were given the chance to be a robot, what law would you want to execute?


Today, humans easily have the advantage as far as brain power is concerned. However, because of the rapid rate at which computing power rises, it has been projected that supercomputers will beat the performance of the highly parallel human brain in as little as 20 years. Even granting a more conservative estimate of twice that, 40 years is not a long time to wait for a computer that is as powerful as a human mind.

That is not to say that these computers would be superior to humans mentally. Humans would still have the ability to "think" that the computers lack. However, given a good program that allowed the robots to adapt to new situations, and the sheer processing power of these machines, humans would be at a disadvantage. A large number of machines such as these could easily take control of the Earth.

There certainly are an enormous number of factors that haven't been considered, but the
point is that the controversial idea of robots actually thinking for themselves is not even
relevant. In this example, well-programmed but non-thinking robots could potentially
take over the Earth.

So, consider what happens if man could produce a computer that is more or less modeled after humans. It could be aware of its existence, have a will to survive, a desire to reproduce, and be housed in a mechanical shell that is physically superior to humans. This computer might not truly be conscious, nor does it have to be; it just has to be programmed with these and other characteristics. This computer will know its abilities and those of man, and will know man's flaws as well.
These computers, as a combined unit, may decide that humans have mucked up the Earth enough. If the robots are going to survive for any length of time, humans must be removed. To put it bluntly, if this happened, humanity's fate would be sealed.

Though the idea of thinking robots, or even non-thinking ones, taking over the Earth may seem remote, even the capability of a robot to kill a few people should be a concern.
This is where Asimov's laws of robotics come into play. Embedding these laws as deeply into robots as Asimov describes may be technically hard to achieve, but I am sure that there would be a way of applying something similar. Doing this guarantees that robots would be the slaves of man, rather than the other way around.

One concern about Asimov's laws is that these slave robots could physically create other robots in which the laws were not implanted. However, this is not possible, since these slave robots could not have the "desire" to produce robots that could possibly harm humans. If they did, then by Asimov's first law their own circuitry would be damaged. Knowing that they would be damaged, they could not go through with it, because this would violate the third law.
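The induction in the paragraph above can be made explicit with a toy check. The `may_build` function and its flags are hypothetical, purely to illustrate why a law-abiding robot can never be the ancestor of a lawless one.

```python
# Toy model of the argument: a robot with the embedded laws may only
# build children that also carry the laws; a lawless builder is unconstrained.
def may_build(builder_has_laws: bool, child_has_laws: bool) -> bool:
    if not builder_has_laws:
        return True   # nothing stops a lawless builder (the real danger)
    # A lawless child could harm humans, so the first law forbids building it,
    # and the circuitry damage that would cause is barred by the third law.
    return child_has_laws
```

By induction, no chain of law-abiding builders can ever produce a robot without the laws; the only way a lawless robot enters the picture is through a builder outside the scheme, such as a human.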

The biggest problem with Asimov's laws, though, is that they can only be completely effective if every robot or computer were deeply embedded with them. The prospect of some humans creating a robot that does not abide by Asimov's laws is a matter of real concern, as real as the concern over humans creating any other weapon of mass destruction.

But humans will be humans no matter what anyone does. There is simply no way to keep humans from killing one another, no matter what tools they have at their disposal. Surely there would have to be severe penalties for anyone who attempts to create a robot without these laws, but this does not solve the problem.

The importance of Asimov's laws is clear, nonetheless. A slightly unstable computer that is mentally more powerful than a human could create an even more powerful computer far faster than humans could create something in defense. By implementing Asimov's laws, such an unbalanced computer couldn't exist, and a computer would only create other, equally law-abiding computers.


Many science fiction authors have considered the idea that one day, mechanical beings could be physically, as well as mentally, superior to humans. These authors also often wonder what would happen if these robotic beings simply decided that humans are unnecessary.

To help alleviate this problem, Asimov proposed the Three Laws of Robotics, which state:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.


Asimov's idea is that these rules are so deeply embedded into the circuitry of every robot made that if a robot were to break one of the rules, its circuitry would actually be physically damaged beyond repair. Assuming this is technically possible, and the rules were embedded in every robot made, they are the only thing that would be sufficient to keep robots from taking control of the world from humans.

Consider a robot that is physically superior to humans. It can move faster, is far stronger, is not injured as easily, and doesn't tire. It is also quite aware of its surroundings via sensory devices similar to humans', but potentially much more accurate. These robots could communicate over a very fast wireless network, and be solar powered. The thought of such a machine is not that far off; a decade or two at most.

Now consider that this robot has been programmed by some stupid person to kill every human that it sees. There is little a
single human could do to stop it. A group of humans could defeat a few machines, but the machines would have access to
all the same tools as humans would, such as guns and atomic weapons. In the end, if there were enough machines, people
might stand little chance of survival, unless they were armed with robots of their own.

The only area where humans would really hold the upper hand would be intelligence. The robots could not really think for themselves, and would not have the ability to adapt to the new human techniques that would eventually be devised to destroy them.


If the deadly robots were programmed to consider it nearly as important to keep from being destroyed as to kill people, and
were programmed to look for deficiencies in themselves and their tactics, then it would turn into a battle of who could think
and adapt faster.

Basics of FIRST Robotics - FRC Stuttgart. (2020). Retrieved 23 July 2020, from https://sites.google.com/a/isswiki.de/frc-stuttgart/basics-of-first

Guevarra, Leah R. "Technology 7: Exploratory (Robotics)". Batangas State University Main Campus I

Robotics: A Brief History. (2021). Retrieved 28 July 2021, from https://cs.stanford.edu/people/eroberts/courses/soco/projects/1998-99/robotics/history.html

All the ethical questions surrounding AI and robot employees. (2017). Retrieved from https://www.peoplebank.com.au/blog/2017/11/all-the-ethical-questions-surrounding-ai-and-robot-employees



Mrs. Leah R. Guevarra
Mr. Jefrey C. Menodza
Miss Timmy Anne A. Lopez
