Will our kids be immortal or extinct?
-
<blockquote class="ipsBlockquote" data-author="reprobate" data-cid="566032" data-time="1458441800">
<div>
<p>by christ you can be an antagonistic fellow at times.</p>
<p>no you didn't give a worst case scenario, but you did give an opinion that gollum's was completely wrong, and go on the attack basically shouting 'what would you know!' on a matter of opinion/speculation. and the article pretty clearly has human extinction as the worst case scenario.</p>
<p>wasn't asking for help, just raising a point. if that point is actually addressed in the second part, please let me know and i'll read it; but as i said, without that topic being covered i can't be bothered.</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Your 'point' was ignorant bollox. And I just don't think much of you and gollum's posts, so tough shit if you find my responses to your inane posts 'antagonistic'. I find your repeatedly inane posts antagonistic. Maybe you could try actually reading the article that is the basis for the thread before jumping in?</p>
<p>Gollum's assertion is categorically wrong. Why? Because if he thinks that AI just ignoring us and thinking of us as apes is the very, very worst case scenario, he is contradicting basic logic and common sense. I can already think of a worse scenario; heck, the article gives an example. There... his theory has already been proven incorrect.</p>
<p>As for your question... it is so incredibly facile and ill thought out that it is pointless me trying to correct you, as you are not prepared to even investigate the subject you are trying to discuss. The only point you raised is that you love raising facile points despite the point being addressed and discussed... just a click away. </p> -
<blockquote class="ipsBlockquote" data-author="NTA" data-cid="565712" data-time="1458335256">
<div>
<p>While the wife and her mother tend to gush at the kids doing something as simple as not falling down, I try to steer down the path of honesty.<br><br>
They've got to do something pretty unexpected to get high praise from me.</p>
</div>
</blockquote>
<p> </p>
<p>I think I feel even more sorry for your kids than I did before.</p>
<p> </p>
<p>Fascinating topic though, and rather scary. Case in point: this little vid I saw on a mate's Facebook page. To say this gave me eerie images of large Austrian bodybuilders with questionable acting skills is an understatement. I'm sure the T-800s made similar jokes as Sophia did at the end......</p>
<p> </p>
<p> </p>
<p>The time travel analogy about how man has advanced at the start of the article was really interesting. I yarned with the old man over a beer the other day about technology and how meeting mates in pubs is so different, i.e. he couldn't text to say he was running late, and the fact that these days I can store much more music on a device a few centimetres square than he could on the bags of records he had to lug around. Even basic shit like showing my boys cassette tapes and them having no idea what they are; I'm sure we all have examples of that.</p>
<p> </p>
<p>When we're all crusty ( er ) old fluffybunnys in our 60s and 70s the world is gonna be a bit baffling and terrifying, even more so than for the old folk nowadays who can't surf the net, work sky TV etc. It's gonna be a challenge to keep up and I'm worried for myself in particular cos I'm a technological retard.</p> -
<p></p><p></p><blockquote class="ipsBlockquote"><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;font-size:16px;">And given the advantages over us that even human intelligence-equivalent AGI would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.</span></blockquote>
<p> </p>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;font-size:16px;">This is it really. Whatever method or approach to AI we use (neural networks, symbolic, etc.), whether it comes from major tech breakthroughs or simply from the existing rate of incremental improvements, anywhere in the next 20 to 30 years we could reach what the article refers to as AGI. And from there the humans involved in the project could simply become redundant. Whatever limitations technology imposed at that point in time could become irrelevant.</span></p>
<p> </p>
<p><span style="font-size:16px;">You would hope that once it reached a sentient state the whole thing would be air-gapped, but would it even matter? It would be smart enough to socially engineer its handlers to do whatever.</span></p>
<p> </p>
<p><span style="font-size:16px;">I haven't read the second part yet, I will when I get a chance.</span></p>
<p> </p>
<p><span style="font-size:16px;">Obviously one of the prime directives it would be programmed with would be to learn, so curiosity wouldn't be an issue. Another one you would expect would be to not harm. But then could it reprogram itself (even code that had been designed not to be overwritten)? Of course it could! So how a sentient machine would act, no-one can possibly guess.</span></p>
<blockquote class="ipsBlockquote" data-author="Don Frye" data-cid="566307" data-time="1458532654">
<div>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;font-size:16px;">This is it really. Whatever method or approach to AI we use (neural networks, symbolic, etc.), whether it comes from major tech breakthroughs or simply from the existing rate of incremental improvements, anywhere in the next 20 to 30 years we could reach what the article refers to as AGI. And from there the humans involved in the project could simply become redundant. Whatever limitations technology imposed at that point in time could become irrelevant.</span></p>
<p> </p>
<p><span style="font-size:16px;">You would hope that once it reached a sentient state the whole thing would be air-gapped, but would it even matter? It would be smart enough to socially engineer its handlers to do whatever.</span></p>
<p> </p>
<p><span style="font-size:16px;">I haven't read the second part yet, I will when I get a chance.</span></p>
<p> </p>
<p><span style="font-size:16px;">Obviously one of the prime directives it would be programmed with would be to learn, so curiosity wouldn't be an issue. Another one you would expect would be to not harm. But then could it reprogram itself (even code that had been designed not to be overwritten)? Of course it could! So how a sentient machine would act, no-one can possibly guess.</span></p>
</div>
</blockquote>
<p>I will be interested to see if you change your views after reading the second part. I did.</p> -
<p>I love the bit where you found an 18 month old article that half of us had already read because we actually follow this shit, & now you are hissy fitting "You haven't read it!!!" left, right & centre. Congrats, you stumbled upon an 18 month old article & are now an expert. Having read that one. One.</p>
<p> </p>
<p>Although it is not like you to scream "idiot!" at anyone who disagrees with your "well researched" ideas. "Read this one thing I only just read, come to my opinion or you are an idiot". It's like Winger, only with rage & Napoleon issues.</p>
<p> </p>
<p>But it's ok, 18 months is no time at all in this era. It's not like an AI beat Go in that time. </p>
<p> </p>
<p>For every "run! run for the hills!!" Nick Bostrom there's guys like Ray Kurzweil at Google who think differently - and notably Bostrom is a professional "thinker", Kurzweil is actually making stuff. Oddly, the people with hands-on experience designing working systems have fewer issues with this than guys whose job it is to think up scenarios & then try to get published. Half the guys on the AI doom bandwagon are professional publicists. You try to put people down with "they are experts, you're stupid!!" (as are you now you've read that one, old, article. Expert I mean.) But -</p>
<p> </p>
<p><em>"With a few exceptions, most full-time A.I. researchers think the Bostrom-Tegmark fears are premature. A widely repeated observation is that this is like worrying about overpopulation on Mars."</em></p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/'>http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/</a></p>
<p> </p>
<p>Or this guy -</p>
<p> </p>
<p><a data-ipb='nomediaparse' href='https://www.technologyreview.com/s/546301/will-machines-eliminate-us/'>https://www.technologyreview.com/s/546301/will-machines-eliminate-us/</a></p>
<p> </p>
<p>Who is actually, you know, designing deep learning, not spitballing philosophical questions about it.</p>
<p> </p>
<p>Also worth noting on the board of the Future of Life Institute, which is chasing these nightmares? Alan Alda & Morgan Freeman. Oh, and one of the founders of Skype!</p>
<p> </p>
<p>Bostrom (who is oft cited & rarely actually understood - tho' I'm sure you fully get him) says things like -</p>
<p> </p>
<p><em>Imagine, Bostrom says, that human engineers programmed the machines to never harm humans — an echo of the first of Asimov’s robot laws. But the machines might decide that the best way to obey the harm-no-humans command would be to prevent any humans from ever being born.</em></p>
<p><em>Or imagine, Bostrom says, that superintelligent machines are programmed to ensure that whatever they do will make humans smile. They may then decide that they should implant electrodes into the facial muscles of all people to keep us smiling.</em></p>
<p> </p>
<p>Holy fuckballs!!! But then people miss this bit -</p>
<p> </p>
<p><em>Bostrom isn’t saying this will happen. <strong>These are thought experiments</strong>.</em></p>
<p> </p>
<p>He also has one where he says that he is not 100% sure he is not currently living inside a simulation.</p>
<p> </p>
<p>That's his job, to think up freaky shit & then argue all sides of it.</p>
<p> </p>
<p>It's not dissimilar to the thing we saw a few years back where a few professional thinkers talked about peak oil & how we could work without oil by 2020. But notably the guys in the oil majors, automotive design, power generation etc. were not losing their shit at the thought. Contrast it, too, with antibiotics. We currently have almost every single medical professional stressing about antibiotic resistance. Not professional thinkers & self publicists; actual surgeons general, heads of hospitals, Centers for Disease Control heads. I'm less worried my kids will live in the matrix, more that they might die in minor surgery. Or, more likely, not have access to the few remaining drugs that work because they don't have good enough insurance, because they are living on a basic universal income & not an actual job, & they were the generation before in utero gene therapy. </p>
<p> </p>
<p>While I think it's great you found that article & posted it here & started a discussion, calling anyone who disagrees with you on it an idiot is kinda pathetic, especially given you seem to have just stumbled on this very, very late. Maybe after reading one of Morgan Freeman's tweets, or seeing Elon Musk on Big Bang Theory. </p> -
<blockquote class="ipsBlockquote" data-author="gollum" data-cid="566364" data-time="1458556036">
<div>
<p>I love the bit where you found an 18 month old article that half of us had already read because we actually follow this shit, & now you are hissy fitting "You haven't read it!!!" left, right & centre. Congrats, you stumbled upon an 18 month old article & are now an expert. Having read that one. One.</p>
<p> </p>
<p>Although it is not like you to scream "idiot!" at anyone who disagrees with your "well researched" ideas. "Read this one thing I only just read, come to my opinion or you are an idiot". It's like Winger, only with rage & Napoleon issues.</p>
<p> </p>
<p>But it's ok, 18 months is no time at all in this era. It's not like an AI beat Go in that time. </p>
<p> </p>
<p>For every "run! run for the hills!!" Nick Bostrom there's guys like Ray Kurzweil at Google who think differently - and notably Bostrom is a professional "thinker", Kurzweil is actually making stuff. Oddly, the people with hands-on experience designing working systems have fewer issues with this than guys whose job it is to think up scenarios & then try to get published. Half the guys on the AI doom bandwagon are professional publicists. You try to put people down with "they are experts, you're stupid!!" (as are you now you've read that one, old, article. Expert I mean.) But -</p>
<p> </p>
<p><em>"With a few exceptions, most full-time A.I. researchers think the Bostrom-Tegmark fears are premature. A widely repeated observation is that this is like worrying about overpopulation on Mars."</em></p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/'>http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/</a></p>
<p> </p>
<p>Or this guy -</p>
<p> </p>
<p><a data-ipb='nomediaparse' href='https://www.technologyreview.com/s/546301/will-machines-eliminate-us/'>https://www.technologyreview.com/s/546301/will-machines-eliminate-us/</a></p>
<p> </p>
<p>Who is actually, you know, designing deep learning, not spitballing philosophical questions about it.</p>
<p> </p>
<p>Also worth noting on the board of the Future of Life Institute, which is chasing these nightmares? Alan Alda & Morgan Freeman. Oh, and one of the founders of Skype!</p>
<p> </p>
<p>Bostrom (who is oft cited & rarely actually understood - tho' I'm sure you fully get him) says things like -</p>
<p> </p>
<p><em>Imagine, Bostrom says, that human engineers programmed the machines to never harm humans — an echo of the first of Asimov’s robot laws. But the machines might decide that the best way to obey the harm-no-humans command would be to prevent any humans from ever being born.</em></p>
<p><em>Or imagine, Bostrom says, that superintelligent machines are programmed to ensure that whatever they do will make humans smile. They may then decide that they should implant electrodes into the facial muscles of all people to keep us smiling.</em></p>
<p> </p>
<p>Holy fuckballs!!! But then people miss this bit -</p>
<p> </p>
<p><em>Bostrom isn’t saying this will happen. <strong>These are thought experiments</strong>.</em></p>
<p> </p>
<p>He also has one where he says that he is not 100% sure he is not currently living inside a simulation.</p>
<p> </p>
<p>That's his job, to think up freaky shit & then argue all sides of it.</p>
<p> </p>
<p>It's not dissimilar to the thing we saw a few years back where a few professional thinkers talked about peak oil & how we could work without oil by 2020. But notably the guys in the oil majors, automotive design, power generation etc. were not losing their shit at the thought. Contrast it, too, with antibiotics. We currently have almost every single medical professional stressing about antibiotic resistance. Not professional thinkers & self publicists; actual surgeons general, heads of hospitals, Centers for Disease Control heads. I'm less worried my kids will live in the matrix, more that they might die in minor surgery. Or, more likely, not have access to the few remaining drugs that work because they don't have good enough insurance, because they are living on a basic universal income & not an actual job, & they were the generation before in utero gene therapy. </p>
<p> </p>
<p>While I think it's great you found that article & posted it here & started a discussion, calling anyone who disagrees with you on it an idiot is kinda pathetic, especially given you seem to have just stumbled on this very, very late. Maybe after reading one of Morgan Freeman's tweets, or seeing Elon Musk on Big Bang Theory. </p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Actually I posted on this in other places quite a while ago and only posted it here as it came up in another topic and I decided not to derail that thread. I usually choose this article to share with people who might not be as interested in the field because it is easier to understand... and when sharing it with friends I am not much interested in trying to show how clever I am by posting as complex an article as I can discover; I think that article covers different angles and opinions in a simple way. So you can shove all your snide barbs up your ass. I also did AI papers at Uni over 20 years ago as part of my Comp Sci Masters course, so I have been interested in this field for a long time. I am aware that you use this sort of tactic to divert from the weakness of your argument, but I am not much interested.</p>
<p> </p>
<p>Your very worst case scenario is a laughable joke. </p>
<p> </p>
<p>The rest of your post doesn't cover anything new; the quotes from the article where you tell others what they have missed are quite amusing though... projection?</p>
<p> </p>
<p>I find your comments about the Future of Life Institute quite telling, as it sums up your usual disingenuous method of posting.</p>
<p> </p>
<p>Yes, it has Alan Alda and Morgan Freeman on the board. So what? Are you saying they cannot add value? Do you know anything about these guys except what you have seen on TV? Did you read the bio on Alda? Both these guys' skill sets have long been about communicating complex scientific theories to laymen. A valuable skill to any scientific organisation trying to raise awareness of a topic important to them.</p>
<p> </p>
<p>You of course only mention those 2 names in your attempt to denigrate the organisation's work. You don't mention any of the other names of eminent scientists and futurists. Why is that? Of course it is because you are not interested in genuine debate, you are just interested in finding a contrary view and being as snide and disingenuous as you can be to make whatever weird point you are trying to make.</p>
<p> </p>
<p>I will link to the full list so people can judge your attempt to misrepresent for themselves.</p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://futureoflife.org/team/'>http://futureoflife.org/team/</a></p>
<p> </p>
<p>Your hubris is on full display when you categorically state a worst case scenario. Nobody else is really doing that. I gave a range between extinct and immortal (with a question mark), others are saying they don't know and are just thought experimenting; you, however, jump straight in there with a categorical worst case scenario. Yes, Gollum of TSF knows what no other expert does. It is amazing. And when his announcement gets laughed at... he looks at some dates on an article and off he posts.</p> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566404" data-time="1458588811">
<div>
<p>Actually I posted on this in other places quite a while ago and only posted it here as it <strong>came up in another topic</strong> and I decided not to derail that thread. </p>
</div>
</blockquote>
<p>It didn't just come up, you brought it up out of nowhere - in your own words:</p>
<div> </div>
<div>
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="565317" data-time="1458252883">
<div>
<p><strong>At the risk of veering wildly off topic.... </strong></p>
<p> </p>
<p>Read this and you will see yet another reason why I couldn't give a flying fuck about temperatures rising over the next hundred years (and it has nothing to do with climate change)</p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html'>http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html</a></p>
<p> </p>
<p>If the world want to get in a tizz about something.. it should be this.</p>
</div>
<div> </div>
</blockquote>
</div> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566404" data-time="1458588811">
<div>
<p>I am aware that you use this sort of tactic to divert from the weakness of your argument, but I am not much interested.</p>
<p>Your very worst case scenario is a laughable joke. </p>
<p>The rest of your post doesn't cover anything new; the quotes from the article where you tell others what they have missed are quite amusing though... projection?</p>
<p>I find your comments about the Future of Life Institute quite telling, as it sums up your usual disingenuous method of posting.</p>
<p>you are just interested in finding a contrary view and being as snide and disingenuous as you can be to make whatever weird point you are trying to make.</p>
<p>Your hubris is on full display</p>
<p>yes, Gollum of TSF knows what no other expert does.</p>
</div>
</blockquote>
<p> </p>
<p>You know, usually you just post "idiot!" & go with that. I get that you've tried to pad it out this time & to post "wrong!" in a few variations - without actually making any attempt to debate the issue - but it feels like you wasted a lot of time there, so I've summarized your key points, and, as always when someone dares to disagree with you, they are so on point & insightful.</p>
<p> </p>
<p></p>
<p>Trying to actually lay out an argument - because I really do think this is an interesting topic:</p>
<p> </p>
<p>Going to the core of the original post, there are 2 aspects.</p>
<p> </p>
<p>1) Will an AI get out of control & present a threat to mankind as a whole?</p>
<p> </p>
<p>And my take on that is no. No more than the Stuxnet virus broke out & deleted the internet. We are not going to go from 0 to God. In a few years we will have real life AIs testing every possible ethical subroutine imaginable, when AI-driven cars are choosing between killing a pedestrian & hitting another car, or AI predator drones are choosing the level of collateral damage acceptable. Ethical coding is already a huge thing in the industry. Things like the paperclip example often cited are not really taken seriously by anyone in the industry, because even if you are doing coding 101 you understand that typing A = 1 to infinity, next A is probably not the core of good code. It's like implying cutting edge AI code will be written by the retarded. One of the great threats to mankind has always been mankind's emotion & irrationality, the idea that MAD won't work with a human who is nuts - say a North Korean, or one with a martyr complex - Islamic terrorists. When people predict AI doom, beyond the idea that 60 or 100 years of coding low level AIs will have taught us nothing, they invariably attribute human flaws to non-human AIs. It's the equivalent of going "but what happens when the AI gets its period!!"</p>
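The "coding 101" point above can be sketched as a toy pair of functions (entirely hypothetical code, not from any real AI system): an objective pursued with no stopping condition, versus the same objective with an explicit target and resource budget - roughly the distinction the paperclip thought experiment hinges on.

```python
def make_paperclips_naive(resources):
    """Unbounded maximiser: consumes every resource it is given."""
    clips = 0
    while resources > 0:  # the only brake is running out of matter
        resources -= 1
        clips += 1
    return clips

def make_paperclips_bounded(resources, target, budget):
    """Same goal, but with a target and a hard resource budget."""
    clips = 0
    while clips < target and budget > 0 and resources > 0:
        resources -= 1
        budget -= 1
        clips += 1
    return clips

print(make_paperclips_naive(1000))            # 1000 - eats everything
print(make_paperclips_bounded(1000, 10, 50))  # 10 - stops at the target
```

The point being: the unbounded loop is a caricature that no working engineer would ship, which is why practitioners tend to shrug at it.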
<p> </p>
<p>2) Even if AIs are a threat, are they the biggest threat we face?</p>
<p> </p>
<p>And again I think it's not even close. Climate change & the ensuing wars for water, food & basic survival are already destabilising all of Europe; first the Middle East, & soon North & then Sub-Saharan Africa will follow. Even if you don't believe in climate change, you can believe in 4 million refugees currently trying to get to Germany and the catastrophic social unrest that'll bring. That's the sort of shit that starts a world war. Far worse in my opinion is the antibiotic issue - and unlike rogue AIs, virtually the whole medical industry has the shits over that. And the inequality issue we face in a few years as millions of jobs are lost to, um... AIs. Again, if you want a recipe for global war, young men without jobs or money has always been a great starter.</p>
<p> </p>
<p>And if you want a proven "all life" killer, the earth has a history of asteroid strikes wiping out virtually all life. And anything of decent size would easily wipe out mankind. And it's not hypothetical or a thought experiment; it's happened. So in terms of "we must focus on this as it presents an existential threat!", rogue AIs are way down the list. Ironically, the best shot at tracking rogue asteroids would be an AI tasked with doing that, sitting in a space mounted telescope.</p>
<p> </p>
<p>I guess I would think differently if I was a tech mogul whose core company needed top level AIs to work & was losing out to his main competition. Then I'd want a brake put on AIs for sure. Or maybe, even better, establish myself as the go-to guy to oversee the laws around that. I.e. I'm not sure I 100% trust Musk's motives in everything he does.</p> -
<p>I genuinely do not know what I think the end result will be, but I have concerns about the sheer broadness of the possible outcomes, and one thing I am 100% convinced of is that the range is incredibly broad. To specify a predicted outcome is fine, if guesswork (like everyone else), but to set a worst case scenario is foolish. </p>
<p> </p>
<p>I took issue with your statement that the very very VERY worst case scenario was AI thinking of us as great apes and flying off into space. That is nonsensical and demonstrably wrong. </p>
<p> </p>
<p>Your position that AI will not get out of control and present a threat is perfectly valid, as like everyone else, it is a guess at the unknown.</p> -
<blockquote class="ipsBlockquote" data-author="reprobate" data-cid="566540" data-time="1458637277">
<div>
<p>It didn't just come up, you brought it up out of nowhere - in your own words:</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Yes and?</p>
<p> </p>
<p>A post got me thinking about it, I posted it.. and then decided that it was probably worth its own thread....</p>
<p> </p>
<p>But actually I am not really interested in your views; you have time to look through my posts, yet could not be bothered actually reading about the topic being discussed. Go back to watching Disney Junior, lad.</p> -
<p>A 'close to the topic' link which may provide some interesting scifi reading </p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://io9.gizmodo.com/the-1946-story-that-predicted-how-destructive-the-inter-1766262905'>http://io9.gizmodo.com/the-1946-story-that-predicted-how-destructive-the-inter-1766262905</a></p> -
<blockquote class="ipsBlockquote" data-author="Crucial" data-cid="566628" data-time="1458679316">
<div>
<p>A 'close to the topic' link which may provide some interesting scifi reading </p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://io9.gizmodo.com/the-1946-story-that-predicted-how-destructive-the-inter-1766262905'>http://io9.gizmodo.com/the-1946-story-that-predicted-how-destructive-the-inter-1766262905</a></p>
</div>
</blockquote>
<p> </p>
<p>Thanks.</p>
<p>That is actually quite remarkable. Imagine the imagination required to come up with a story like that in 1946!</p> -
<p>It's also odd that it is a sci-fi take on automation that is slightly different to the ones that commonly entered more mainstream movies/TV.</p>
<p>We have often had the 'computers getting all control freak on us' stories, but this one is quite cool in having the well meaning AI being too literal in a very scary sense. I thought the whole 'I notice you are annoyed with your spouse, here is a way you could kill them and get away with it' thing quite funny.</p> -
<blockquote class="ipsBlockquote" data-author="Crucial" data-cid="566646" data-time="1458684632">
<div>
<p>It's also odd that it is a sci-fi take on automation that is slightly different to the ones that commonly entered more mainstream movies/TV.</p>
<p>We have often had the 'computers getting all control freak on us' stories, but this one is quite cool in having the well meaning AI being too literal in a very scary sense. I thought the whole 'I notice you are annoyed with your spouse, here is a way you could kill them and get away with it' thing quite funny.</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Good point. That is one of the fears of AI, and he pretty much nailed it way back when.</p> -
<p>If I worked in IT, I'd probably be slightly concerned at the assumption that whatever super-intelligence you guys eventually create is going to have a severe case of Asperger's.</p>
<p> </p>
<p>But here's a thought - before you hit the "Enter" key - why don't you run it all past a few of the cool people of the world. </p> -
Hey I'm way cool! All the guys in the chess club tell me so!!!<br><br>
I'd say IT is one of the first places AI is going to outstrip humans in terms of industry. Code writing code will be a lot more efficient, even if it still needs creative guidance occasionally. -
<blockquote class="ipsBlockquote" data-author="NTA" data-cid="566706" data-time="1458716875">
<div>
<p>Hey I'm way cool! All the guys in the chess club tell me so!!!<br><br>
I'd say IT is one of the first places AI is going to outstrip humans in terms of industry. Code writing code will be a lot more efficient, even if it still needs creative guidance occasionally.</p>
</div>
</blockquote>
<p>It will be the more mundane repetitive brainless tasks that go first, like Accountants; you don't even need AI to replace them. Sub-AI will do it.</p>
<p>It'd be much harder to replace a Coder than an accountant or engineer or DBA. But eventually, if AI comes along, all jobs will go.</p> -
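The "code writing code" idea a few posts up can be illustrated with a toy sketch (hypothetical, and nowhere near what a real AI code generator would look like): a snippet that emits the source of simple accessor functions from a template, then compiles and loads them at runtime.

```python
# Toy illustration of "code writing code": generate function source text
# from a template, then compile & load it with exec().
template = "def get_{name}(record):\n    return record[{key!r}]\n"

namespace = {}
for field in ("name", "age"):
    source = template.format(name=field, key=field)  # the generated code
    exec(source, namespace)                          # compile & load it

record = {"name": "Ada", "age": 36}
print(namespace["get_name"](record))  # Ada
print(namespace["get_age"](record))   # 36
```

Trivial, but it shows why repetitive boilerplate is the first thing machines take over: the generator is shorter than the code it writes.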
<blockquote class="ipsBlockquote" data-author="Chris B." data-cid="566704" data-time="1458716066">
<div>
<p>If I worked in IT, I'd probably be slightly concerned at the assumption that whatever super-intelligence you guys eventually create is going to have a severe case of Asperger's.</p>
<p> </p>
<p>But here's a thought - before you hit the "Enter key - why don't you run it all past a few of the cool people of the world. :)</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Who are the cool people?</p> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566776" data-time="1458761817"><p>It will be the more mundane repetitive brainless tasks that go first, like Accountants; you don't even need AI to replace them. Sub-AI will do it.<br>
It'd be much harder to replace a Coder than an accountant or engineer or DBA. But eventually, if AI comes along, all jobs will go.</p></blockquote>
As long as offshore outsourcing is still cheaper for IT, it will beat AI.<br><br>
After that, it will depend on speed of learning. -
<div> </div>
<div>
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566778" data-time="1458761895">
<div>
<p>Who are the cool people?</p>
</div>
</blockquote>
<p> </p>
<p>Me. Henry Winkler. That might be about it. No - probably Samuel L. Jackson as well.</p>
<p> </p>
<p>I also have some slightly more serious thoughts on that article. One is that we are undoubtedly edging closer to being able to create AGI/ASI in the same way that we are edging closer to being able to create actual biological life. But, there's still a quantum leap that needs to be made and it's far from guaranteed that we can make that leap. I'm not really a believer that there are no limits to technology. Some things we can imagine might just not be possible. And who knows how close we are to those limits.</p>
<p> </p>
<p>I guess I'm not quite as sanguine as Gollum about the likelihood of the ASI going berserk, but it doesn't seem very likely. It does smack a bit of the AI having a pretty severe dose of Asperger's, and I don't see why it would. Maybe if it decided to use the bible as a template for how God should act, but I'd imagine it would be too smart for that. It's surely going to have assimilated everything we know, including all the ethical stuff, and I don't really see why it would choose to reject it all and think, "I'm going to kill all the ants, because..."? "I'm going to enslave all the ants, because..."? It's impossible to predict, but aside from watching lots of movies, these don't seem like very likely outcomes.</p>
<p> </p>
<p>I guess, like Dogmeat, I've come quite a distance in a fairly slow time machine and, in some ways, I'm faintly disappointed at our progress. When I was in the primers (Year 1 newbies), we had this "You will go to the moon" book <a data-ipb='nomediaparse' href='http://sagansense.tumblr.com/post/46566070121/you-will-go-to-the-moon-by-mae-and-ira-freeman'>http://sagansense.tumblr.com/post/46566070121/you-will-go-to-the-moon-by-mae-and-ira-freeman</a></p>
<p> </p>
<p>Written in 1959 and, unless you're Neil Armstrong or a handful of others, it's not looking likely. In fact, if you wander around your house there's a lot of stuff that's much smaller, much faster and much more efficient. But not that much stuff that someone from the '60s wouldn't be able to relate to. Almost nothing at all.</p>
<p> </p>
<p>You'd be going, "Hey look this is my mobile phone".</p>
<p> </p>
<p>He'd be going, "Wow, it's a bit like a walkie talkie, but better".</p>
<p> </p>
<p>"No, but it's got a camera in it".</p>
<p> </p>
<p>"Wow, like my Kodak Instamatic".</p>
<p> </p>
<p>"No, no - it's much more powerful. It's got 100 times more power than the computer on the Apollo XI landing module".</p>
<p> </p>
<p>"Wow - it's landed you on the moon?"</p>
<p> </p>
<p>"No", says Buzz Aldrin, stepping in from stage left. "He's done fuck all with it. He uses it to take selfies".</p>
</div>