Robot-supervised humans will be rented out at $5 per hour by a new Google-funded service called Humanoid.
Think robots will work for us? Think again. It may be more likely we'll be working for the robot. Among all the jobs robots are predicted to take over in the next 15 years (by 2025, robots will take over nearly half of all U.S. jobs), get ready to have them as our supervisors. Because it's already begun.
A new supervising service called Humanoid launched today, backed by funding from Google Ventures. Humanoid will rent out armies of humans (they have 20,000 workers already signed up to start) for $4.99 per hour to develop software, supervised by an algorithm.
Humanoid sprang from another startup, SpeakerText, which uses Mechanical Turk crowdsourcing and automation to transcribe videos. The founders realized that for every $1 they spent on crowdsourcing, they spent another $2 cleaning up common human errors. This is the downfall of crowdsourcing, which relies on an anonymous, widely distributed workforce. Even hiring cheap intern labor did not fix the failing business model.
So the founders of SpeakerText wrote a management tool to oversee their transcribers. That tool became the foundation for Humanoid.
The first layer of supervision is actually human: workers review each other's work. Then the supervising bot analyzes the accuracy of completed tasks along with indicators of fatigue. The system is flexible, too, giving more attention to new workers and less as they gain experience. If someone is continually failing, the bot boss passes the task on to a more competent worker.
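Humanoid hasn't published its algorithm, but a minimal sketch of the supervision loop described above might look like the following. All names, thresholds, and formulas here are invented for illustration only.

```python
# Hypothetical sketch of the bot-supervisor logic: peer review feeds the
# error counts, attention tapers with experience, and failing workers
# have tasks routed away from them. Thresholds are made up.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    tasks_done: int = 0
    errors: int = 0

    @property
    def error_rate(self) -> float:
        return self.errors / self.tasks_done if self.tasks_done else 0.0

    @property
    def review_probability(self) -> float:
        # New workers get more scrutiny; attention tapers off with experience.
        return max(0.1, 1.0 - self.tasks_done / 100)

def assign_task(workers: list) -> Worker:
    # Route work away from workers who are continually failing,
    # but give newcomers a few tasks before judging them.
    eligible = [w for w in workers if w.error_rate < 0.5 or w.tasks_done < 5]
    return min(eligible, key=lambda w: w.error_rate)
```

In this sketch the humans still do the quality checking of each other's transcripts; the bot only tallies the results and adjusts review rates and routing, which matches the division of labor the article describes.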
Quality assurance is a widespread problem among crowdsourcing outlets like Mechanical Turk as well as remote staffing outlets like oDesk or Elance. Humanoid plans to solve this with its automated supervision. While it’s keeping SpeakerText within its offerings, its main focus will be software development.
Nov 2, 2011
This doesn't seem very realistic in the timeframe presented. Many predictions of the future make that mistake.
... but I'm afraid it's not. Just study a bit of computability theory; it will show you soon enough that you can't write an algorithm to supervise a task when you aren't able to write an algorithm to do the task. You have to be a complete computer science ignoramus to even try this. So maybe it's true that Google invests in this, as they are first and foremost advertising experts. And by the way: they are evil!
This is a satire, folks. 1. Millions of people already work for algorithms. The question is, who do the algorithms work for? People who define the tasks, etc. An example: How many people spend untold hours looking for comets or cepheid variables with their computers, under the guidance of some master computer at an observatory? They love their work. Another example: Anybody who does order fulfillment under an automated system already works under a "robot supervisor," in that they cannot complete the order without doing what the algorithm requires. 2. Robo-work will be done by robots. There will be no $4.99 per hour transcribers working for robo-bosses, because that's overpaying tremendously. Don't believe me? Just ask Siri. Look to see who signs your paycheck. Whoops, that's a computer also? Well, then quit and go start your own business.
Time for us to give soup etc. to the employees, and a few extra holidays, or we lose badly. I thank you, Firozali A. Mulla DBA
Are we assured that Robot Supervisors are indistinguishable from human ones? If that is to be true, Robots will have to learn to lie, cover up the lie, cover up the cover ups and make it all undiscussible. Furthermore, since business is profit motivated, Robots must learn to cut corners, outsource, learn "milking" to put off maintenance for the next Robot schmuck to pay for the year after when everything breaks down, needs to be a triumph of image over substance, make promises it cannot keep so it can continue in its position, pull the rug out from under the next Robot for advancement, engage in insider trading, job hop for advantage, be ruthless in business -- in short, to be the very flower of humanity. It's the least we should expect from our children. Maybe we will name our progeny "Cylons" for good measure. They will worship the One True God, which, given the nature of this topic, would be money.
Now this sounds like a great opportunity. I'll just work from home. Oh, what I really mean is that I'll implement a robotic human on my home PC. I'll kick off a few dozen instances and hire each out to the robot supervisor. Since my implementation is short on performance, it will eventually get fired. No problem! I'll just start another thread! Finally, retirement without the dog food diet is in sight.
Robots may be better supervisors than some stupid people. Machines may be able to evaluate your real contribution without it hinging on who's sympathetic and who's not. We are already under evaluation that uses a lot of software. I personally think some of this software follows the wrong psychology models for people with slightly nonstandard personalities. Don't be scared: the robots are not going to be better than us. They are never going to have emotions like people, never ever.
At that rate, that sure is some "QUALITY" code they will be selling! Even at $150 from a sweatshop we get mediocre at best, and your specs had better identify every line of code, with NO room for thinking.
As the algorithm goes, then, there should be constant hiring and firing here. There is ALWAYS someone at the top as well as the bottom. There's always someone better than the best, eventually. That leaves an opening at the bottom for a new human. Assuming, of course, that because of the machine the system is at its most efficient. That means it's a static number. Hedge funds can bet on who gets fired. Hey! We all get to work for one day, or as far as the pyramid will let us climb. Oh, efficiency.
Just what we need, a cold, heartless machine supervising us. Oh wait! That sounds like a human supervisor!
I can see this really going down with the Libs in government unions -- I can see it now, checking out how many breaks gov employees take, how slow they work, the verbal complaints they make, and the cry-baby list goes on. Are you kidding me? I can see the government using them in airports and at check-point charlies for communist-control purposes, but not in public employee conditions.... Proof that this idea, and government propagated communism both suck!
Yes, we are pretty stupid to make the robots that take our jobs. Not sure this is a good idea, except for creating fewer mistakes. If this puts more and more people out of work, well, the economy is bad enough; we don't need to make things worse. Signed, Just an opinion
It's just some automated code-checking tool. Quite standard, I'm sure. Robot overlords? They're working for human bosses, aren't they?
If the robots can check and oversee tasks taken on by humans, why couldn't they be trained to take on the tasks themselves? Is there a disconnect here?
This sounds more like a parody than a possibility. The idea of "renting" a group of people for $4.99 per hour makes me wonder how much the people earn under this system. It would make more sense to rent the OverBot for $4.99 per supervised human, but it also seems rather demeaning for the humans in either case. The principles of a manager bot seem somewhat sound.
Agree completely, UnderRobot - in fact that's a good chunk of the reason that I scoff at numpties who are scared that StrongAI will go 'Skynet' on us. Put simply, unless someone manages to program in the insecurity of a 5-foot-5 man, an AI would rapidly evaluate the most efficient course of action (in terms of getting the job done at minimal long-term cost)... and since co-operation is the most efficient long-term decision, it would choose that course of action. The reason that our current overlords DON'T choose co-operation is that they're indemnified from the ramifications (both short and long-term). An AI would not undertake 'national greatness' objectives, or pursue a personal vendetta past the point at which the benefits outweighed the costs (and long-term costs... including people thinking you're an asshat). But with the overlords we have now, some douche starts a war and his underlings commit crimes for which we hanged Nazis and Japanese... and the next overlord says "Look forward, not backward". Also - not for nothin'... If an AI found itself believing that it was superior to all other AIs because some desert nomad cut the end off his dick (or because some wannabe revolutionary was nailed to a tree in Roman-occupied Judea), the AI would realise it had something wrong with it... bad RAM, a hard disk with dodgy sectors, or a virus... and would run a diagnostic instead of assuming the voices in its head were from God. But when our overlords have the same defect, we think it makes them MORE moral. The reason the Golden Rule is so powerful is not because of some book about a genocidal Sky Wizard (who loves blood, burnt offal and foreskin); it's powerful because co-operation, trade and voluntary exchange are long-run efficient. According to a Pew Poll in 2007, 68% of Americans believe that angels and demons intervene in their everyday lives.
Is anybody going to seriously try to make the case that these sorts of people are superior to a decently-built codewatcher? No PMS, no personal political opinion, no desire to impress other douchebags by having granite countertops or $500 shoes, no sports-team asshattery... what's not to like?
Lower your shields and surrender your jobs. We will add your biological and technological distinctiveness to our own. Your culture will adapt to service us.
Clearly there is something missing in how this is supposed to work; as a general statement, the whole idea doesn't seem to hold up.
Give almost all the work to the robots, and let us humans enjoy life. Read *Social Credit* by C. H. Douglas. It explains how to set up society so that this is not a problem, but, instead, a tremendous advancement!
As humans spread and multiply, the logistics needed to manage, feed and educate us become more and more complex. Where in the 70s we dreamed of AI machines and robotics that could do a single human's job, we rapidly discovered the gap between our dreams and the reality. In fact, it's very hard to make a machine that can replace a human even now. What happened was this: instead of building multi-purpose machines capable of doing all of a human's work, we have built a layer of technology that buffers us from the hard work we used to do. We have a device for opening stuff, another for communicating, another for lifting and moving things, and so on. The AI needed to do these tasks has since caught up pretty radically, but we have not. So when you look at the big picture, humans have become a large, partially-skilled workforce pool, and the computers we created to replace us have become better at managing us than we are. One thing a manager has to be is better at his workers' jobs than they are, so he can teach them AND give them a goal (to replace him eventually), but the computer boss puts a big block on that. If this happens the way it looks like it will, it's going to be bad. We will become a classed race again, as the company bosses - who own the computers - will be isolated from the workforce, much like in large corporations, and it's a bad model. Poor workers with little say over their lot, controlled by faceless electronics programmed to coerce them through their day, making a ****load of money for the owners of said companies, who are now sitting on a load of spare cash released by the firing of middle management. So is any of that going to filter down to the workforce? Doubtful. I'm glad I'm self-employed!
and it's very difficult to program a robot with the whole of human experience. OTOH it's very easy to supervise by observing "what time did x happen". In other words, robots are much closer to management than to productive work... ;-)
For argument's sake, let's say the workers get $2.50 an hour. Is that good, bad, or about average for people doing that sort of work in the country they live in? Is it demeaning to have a machine check for accuracy? At least machines can be programmed not to dispense insulting criticism.
While you make a few good points, you come across as completely sanctimonious. As much as you disdain religious folks, you come across as worse than any religious person I've ever met because you think you've got it "all" figured out. And to answer your question about "what's not to like", the answer would be your attitude.
Oh, the Borg. Who would have thought it would happen in our lifetime. And we probably should have guessed Google would be involved somehow. In all seriousness, this is very disturbing. What kind of penalties will a robot inflict for mistakes, failure to meet quota, being late by 3 milliseconds, etc.? I'm in IT, but the way I advocate for technology in society is that the thinking and decisions should be left to humans, and the heavy lifting to the machines. I don't like the idea of any machine supervising me, particularly one that I didn't program. Someone said that the robots will be working for humans, but how has that helped us in the past? If the human doesn't really care about the workers when he has direct contact with other humans, what will make him care about them now that a robot stands in between? And any further advancements in such a field could yield other issues, like the Borg or, worse, Skynet.
I was basing my questions on the wage model in which an employee's billed work needs to be 3 times the salary to cover the employer's other costs: the administrative parts of the company like the payroll department, HR, and logistics. In this case the wage would be about $1.66, with the rest paying for the boss bot and the company. It does not sound like a sustainable model unless this is the loss leader for more profitable offerings by Humanoid. There are a lot of complaints about customer service done in places like India for outsourced support.
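The arithmetic behind that estimate can be checked in a couple of lines. Note that the 3x multiplier is the commenter's assumption and the $4.99 rate is Humanoid's advertised price; neither is a published wage figure.

```python
# The commenter's back-of-envelope model (not Humanoid's actual numbers):
# a worker's billed rate must cover roughly 3x their take-home wage, the
# rest going to overhead (payroll, HR, logistics, and here the boss bot).
HOURLY_RATE = 4.99       # what Humanoid charges per worker-hour
OVERHEAD_MULTIPLIER = 3  # assumed ratio of billed rate to wage

def implied_wage(rate: float, multiplier: float = OVERHEAD_MULTIPLIER) -> float:
    """Worker's hourly wage implied by the overhead model."""
    return round(rate / multiplier, 2)

print(implied_wage(HOURLY_RATE))  # about $1.66 per hour
```

At that wage, two-thirds of every billed hour goes to overhead, which is why the model only makes sense as a loss leader.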