Is AI slowly but surely replacing man?
China’s realistic robot Jia Jia can chat with real humans
She can pose for pictures, too.
Mariella Moon, @mariella_moon
04.17.16 in Robots
The University of Science and Technology of China has recently unveiled an eerily realistic robot named Jia Jia. While she looks more human-like than that creepy ScarJo robot, you'll probably still find yourself plunging headfirst into the uncanny valley while looking at her. Jia Jia can talk and interact with real humans, as well as make some facial expressions -- she can even tell you off if she senses you're taking an unflattering picture of her. "Don't come too close to me when you are taking a picture. It will make my face look fat," she told someone trying to capture her photo during the press conference. For more, go to https://www.engadget.com/2016/04/17/jia-jia-robot/
PORT MORESBY: Artificial Intelligence (AI) is developing rapidly, and robots have already started replacing man in the workforce.
Will AI power robots to make man obsolete? What do you think?
Read the article reproduced below for more on the issue:
You might be a robot. This is not a joke: opinion
TECH NEWS
Monday, 18 Mar 2019
9:00 AM MYT
By Scott Duke Kominers
Welcome to the future, where we're all robots, Pepper seems to be saying. — Bloomberg
Whether this prophecy should be viewed as auspicious or menacing is the subject of much speculation and controversy. But there’s no denying that in significant ways, it’s already happening.
As the Stanford University legal scholars Bryan Casey and Mark Lemley put it provocatively in the title of a new paper forthcoming in the Cornell Law Review, “You Might Be a Robot”. They don’t mean that literally, but they’re completely serious. “If you’re reading this you’re (probably) not a robot,” they write, “but certain laws might already treat you as one.”
The problem, Casey and Lemley explain, is that defining what it means to be a “robot” run by “artificial intelligence” is fiendishly difficult. It might be easy to identify Optimus Prime when he’s in battle mode, but suppose he transforms into a truck and you hop in the front seat. Are you now the driver? You might be – US laws sometimes define drivers as a function of where people sit, rather than whether they’re actually guiding the car. And that means that you’ll have liability when Prime runs over some Decepticons.
The Transformers example isn’t far from what’s already happening in the world of self-driving cars. There are real questions, say, about whether someone who’s asleep in a fully automated vehicle is responsible if the car runs into a pedestrian.
Lines blur like this all the time. Is a drone piloted by a human a robot for legal purposes? What about a human driver who relies on route guidance from a smartphone?
Even if legal scholars could agree on how to distinguish humans from machines, precise definitions are unlikely to stand up to the pace of technical change. Some states, for example, have laws requiring all cars to have drivers. That presumably made sense when only humans could drive, but now it means that companies have to put humans in the driver’s seats of any self-driving cars they test.
A legal definition of artificial intelligence that perfectly captured the AI field as it exists today would probably be rendered useless by the next technological advance. If laws defined AI as it existed a decade ago, they’d probably have failed to cover today’s neural networks, which are used to teach machines to recognise speech and images, to identify candidate pharmaceuticals, and to play games like Go.
If scholars and lawmakers instead resort to broad definitions meant to cover unanticipated developments, there could be unpleasant unintended consequences. As Casey and Lemley note, “a surprisingly large number of refrigerators” are technically “federal interest computers” under a 1980s cybersecurity statute that didn’t anticipate how widespread computers would become. This sort of over-coverage can strangle innovation, as it forces people developing new technologies to worry about a web of regulations that probably shouldn't actually apply to their work, but nevertheless do.
So what’s the way out? Casey and Lemley suggest looking to one of the earliest scholars of artificial intelligence, Alan Turing, for advice. Turing suggested a functional test: A machine is “intelligent” whenever its behaviour is indistinguishable from a human’s.
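To make that functional framing concrete, here is a minimal sketch in Python of Turing's behaviour-only criterion. It is illustrative only: the judge, human and machine are hypothetical callables (a respondent maps a question to an answer; a judge maps a transcript to a guess), and the 5% threshold is an arbitrary stand-in for "no better than chance", not anything specified by Turing or by Casey and Lemley.

    import random

    def turing_round(judge, human, machine, questions):
        """One round of a behaviour-only test: the judge sees a transcript,
        never the respondent, and must guess which kind produced it."""
        label, respond = random.choice([("human", human), ("machine", machine)])
        transcript = [(q, respond(q)) for q in questions]
        return judge(transcript) == label  # True if the judge guessed correctly

    def passes_functional_test(judge, human, machine, questions, rounds=1000):
        """The machine 'passes' if the judge does no better than chance,
        i.e. its behaviour is indistinguishable from the human's."""
        correct = sum(turing_round(judge, human, machine, questions)
                      for _ in range(rounds))
        return abs(correct / rounds - 0.5) < 0.05  # within noise of a coin flip

Note that nothing in the test inspects how the machine is built; the verdict rests entirely on what it does, which is exactly the move the authors suggest the law should make.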
Similarly, laws could regulate robots based on actions, rather than the way they’re constructed. Rather than trying to figure out just how “robotic” a car is, for example, rules of the road could depend on functional measures like safety.
That’s a variant of the strategy the US Congress took with the 2016 Better Online Ticket Sales Act, which fought scalpers’ ticket-buying bots by restricting efforts to get around captchas and other bot-busting protocols on ticketing websites, rather than by trying to define what a “scalper bot” actually is.
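In the same spirit, a ticketing site can police behaviour without ever deciding what a "robot" is. The sketch below is purely illustrative and not anything the BOTS Act specifies: the client_id, the thresholds and the looks_like_bot helper are all hypothetical, and a real system would combine many more signals.

    import time
    from collections import defaultdict, deque

    # Hypothetical thresholds: a human is unlikely to finish checkout in
    # under 2 seconds or to attempt more than 5 purchases per minute.
    MIN_CHECKOUT_SECONDS = 2.0
    MAX_ATTEMPTS_PER_MINUTE = 5

    _attempts = defaultdict(deque)  # client_id -> timestamps of recent attempts

    def looks_like_bot(client_id, checkout_seconds):
        """Flag a purchase attempt on behaviour alone -- speed and rate --
        without asking whether the client is 'really' a robot."""
        now = time.monotonic()
        window = _attempts[client_id]
        window.append(now)
        while window and now - window[0] > 60:  # keep a one-minute window
            window.popleft()
        too_fast = checkout_seconds < MIN_CHECKOUT_SECONDS
        too_many = len(window) > MAX_ATTEMPTS_PER_MINUTE
        return too_fast or too_many

The design point is that the check never asks what the client is made of: an implausibly fast human and a throttled bot are treated exactly the same, which is what regulating actions rather than construction looks like in practice.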
It’s also important to embrace the difficulty of defining robots, rather than trying to work around it. Whenever possible, laws of robotics should be defined inductively, using case-by-case approaches. Enforcement should be delegated to regulators, who can adapt implementation more flexibly than legislators can.
Otherwise, we might find ourselves stuck with regulations that can’t tell humans from Cylons. Those aren’t the laws we’re looking for. – Bloomberg
(Scott Duke Kominers is the MBA Class of 1960 Associate Professor of Business Administration at Harvard Business School, and a faculty affiliate of the Harvard Department of Economics. Previously, he was a junior fellow at the Harvard Society of Fellows and the inaugural research scholar at the Becker Friedman Institute for Research in Economics at the University of Chicago.) - Bloomberg/The Star