Make your initial response by Thursday midnight.
Wait until Friday, after all others have made their initial responses, to post thoughtful replies to at least two classmates (by Sunday midnight).
Isaac Asimov, in his 1942 short story "Runaround," proposed "The Three Laws of Robotics" long before we had anything resembling the complexity and autonomy of the robotic systems we have today:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
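Notice that the laws form a strict priority ordering: each law applies only insofar as it does not conflict with the laws above it. A minimal sketch of that ordering as an action filter might look like the following (all names here are hypothetical, invented for illustration; real autonomous systems do not reduce ethics to boolean flags):

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # would the action injure a human?
    allows_human_harm: bool    # would it let a human come to harm through inaction?
    ordered_by_human: bool     # was the action ordered by a human?
    endangers_robot: bool      # does the action risk the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, by action or by inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders (orders that would violate the
    # First Law were already rejected above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_robot
```

Even this toy version hints at the implementation obstacles: the hard part is not the priority logic but deciding, with real sensors and real uncertainty, whether a given action actually sets any of those flags.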
Think about these laws in terms of the robotic and autonomous systems we have today (and currently in development), such as self-driving cars and the jumbo jets that are often allowed to fly and land themselves.
Also, the military has drone aircraft capable of carrying out a reconnaissance mission, or perhaps even a bombing mission, with no human interaction after the mission has been defined.
Even our vacuum cleaners are getting smarter all the time.
Are Asimov's three laws worth considering in the development of newer, more advanced autonomous robotic systems?
What are some obstacles to implementing all three laws?
In what situations should these laws NOT be considered relevant?
What kinds of autonomous robots (other than floor-cleaning) do you imagine might be utilized in office buildings within the next 5-10 years? How about in homes?