Will our kids be immortal or extinct?
-
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566404" data-time="1458588811">
<div>
<p>Actually I posted on this on other places quite awhile ago and only posted it here as it <strong>came up in another topic</strong> and I decided not to derail that thread. </p>
</div>
</blockquote>
<p>it didn't just come up, you brought it up out of nowhere - in your own words:</p>
<div> </div>
<div>
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="565317" data-time="1458252883">
<div>
<p><strong>At the risk of of veering wildly off topic.... </strong></p>
<p> </p>
<p>Read this and you will see yet another reason why I couldn't give a flying fuck about temperatures rising over the next hundred years (and it has nothing to do with climate change)</p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html'>http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html</a></p>
<p> </p>
<p>If the world want to get in a tizz about something.. it should be this.</p>
</div>
<div> </div>
</blockquote>
</div> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566404" data-time="1458588811">
<div>
<p>I am aware that you use this sort of tactic to divert from the weakness of your argument, but I am not much interested.</p>
<p>Your very worst case scenario is a laughable joke. </p>
<p>The rest of your post doesn't cover anything new, the quotes from the article where you tell others what they have missed is quite amusing though.. projection?</p>
<p>I find your comments about the Future of Life foundation quite telling, as it sums up your usual disingenuous method of posting.</p>
<p>you are just interested in finding a contrary view and being as snide and disingenuous as you can be to make whatever weird point you are trying to make.</p>
<p>Your hubris is on full display.</p>
<p>Yes, Gollum of TSF knows what no other expert does.</p>
</div>
</blockquote>
<p> </p>
<p>You know, usually you just post "idiot!" & go with that. I get that you've tried to pad it out this time & to post "wrong!" in a few variations - without actually making any attempt to debate the issue - but it feels like you wasted a lot of time there, so I've summarized your key points, and, as always when someone dares to disagree with you, they are so on point & insightful.</p>
<p> </p>
<p></p>
<p>Trying to actually lay out an argument - because I really do think this is an interesting topic:</p>
<p> </p>
<p>Going to the core of the original post, there are two aspects.</p>
<p> </p>
<p>1) Will an AI get out of control & present a threat to mankind as a whole?</p>
<p> </p>
<p>And my take on that is no. No more than the Stuxnet virus broke out & deleted the internet. We are not going to go from 0 to God. In a few years we will have real-life AIs testing every possible ethical subroutine imaginable, when AI-driven cars are choosing between killing a pedestrian & hitting another car, or AI predator drones are choosing the level of collateral damage acceptable. Ethical coding is already a huge thing in the industry. Things like the paperclip example often cited are not really taken seriously by anyone in the industry, because even if you are doing coding 101 you understand that typing "for A = 1 to infinity... next A" is probably not the core of good code. It's like implying cutting-edge AI code will be written by the incompetent. One of the great threats to mankind has always been mankind's emotion & irrationality - the idea that MAD won't work with a human who is nuts (say a North Korean) or one with a martyr complex (Islamic terrorists). When people predict AI doom, beyond the idea that 60 or 100 years of coding low-level AIs will have taught us nothing, they invariably attribute human flaws to non-human AIs. It's the equivalent of going "but what happens when the AI gets its period!!"</p>
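<p>The "coding 101" point here - that an unbounded goal loop is obviously bad code - can be sketched as a toy contrast. This is illustrative Python only; the function names are invented for the example and none of it is real AI code:</p>

```python
def naive_maximiser():
    """The oft-cited doomsday pattern: optimise a goal with no bounds.
    Roughly 'for A = 1 to infinity... next A'. Never called here,
    because it would never halt."""
    count = 0
    while True:      # no stop condition, no resource limit
        count += 1   # make another paperclip, forever


def constrained_maximiser(target, budget):
    """What even entry-level production code looks like: an explicit
    goal, a resource cap, and guaranteed termination."""
    count = 0
    resources = budget
    while count < target and resources > 0:
        count += 1      # make one paperclip
        resources -= 1  # ...and pay for it
    return count


# A bounded goal halts with a sensible result, capped by the budget.
print(constrained_maximiser(target=100, budget=50))
```

<p>The post's argument is that real systems are written in the second style as a matter of course, which is why the unbounded version is not taken seriously as a likely failure mode.</p>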
<p> </p>
<p>2) Even if AIs are a threat, are they the biggest threat we face?</p>
<p> </p>
<p>And again I think it's not even close. Climate change & the ensuing wars for water, food, basic survival are already destabilising all of Europe; the Middle East went first, & soon North & then Sub-Saharan Africa will follow. Even if you don't believe in climate change, you can believe in 4 million refugees currently trying to get to Germany and the catastrophic social unrest that'll bring. That's the sort of shit that starts a world war. Far worse in my opinion is the antibiotic issue - and unlike rogue AIs, virtually the whole medical industry has the shits over that. And the inequality issue we face in a few years as millions of jobs are lost to, um.. AIs. Again, if you want a recipe for global war, young men without jobs or money have always been a great starter.</p>
<p> </p>
<p>And if you want a proven "all life" killer, the earth has a history of asteroid strikes wiping out virtually all life. And anything of decent size would easily wipe out mankind. And it's not hypothetical or a thought experiment; it's happened. So in terms of "we must focus on this as it presents an existential threat!", rogue AIs are way down the list. Ironically the best shot at tracking rogue asteroids would be an AI tasked with doing that, sitting in a space-mounted telescope.</p>
<p> </p>
<p>I guess I would think differently if I was a tech mogul whose core company needed top-level AIs to work & was losing out to his main competition. Then I'd want a brake put on AIs for sure. Or maybe even better, to establish myself as the go-to guy to oversee the laws around that. I.e. I'm not sure I 100% trust Musk's motives in everything he does.</p> -
<p>I genuinely do not know what I think the end result will be, but I have concerns about the sheer broadness of the possible outcomes; one thing I am 100% convinced of is that the range is incredibly broad. To specify a predicted outcome is fine, if guesswork (like everyone else's), but to set a worst-case scenario is foolish. </p>
<p> </p>
<p>I took issue with your statement that the very very VERY worst case scenario was AI thinking of us as great apes and flying off into space. That is nonsensical and demonstrably wrong. </p>
<p> </p>
<p>Your position that AI will not get out of control and present a threat is perfectly valid, as like everyone else, it is a guess at the unknown.</p> -
<blockquote class="ipsBlockquote" data-author="reprobate" data-cid="566540" data-time="1458637277">
<div>
<p>it didn't just come up, you brought it up out of nowhere - in your own words:</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Yes and?</p>
<p> </p>
<p>A post got me thinking about it, I posted it.. and then decided that it was probably worth its own thread....</p>
<p> </p>
<p>But actually I am not really interested in your views; you have time to look through my posts, yet could not be bothered actually reading about the topic being discussed. Go back to watching Disney Junior, lad.</p> -
<p>A 'close to the topic' link which may provide some interesting scifi reading </p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://io9.gizmodo.com/the-1946-story-that-predicted-how-destructive-the-inter-1766262905'>http://io9.gizmodo.com/the-1946-story-that-predicted-how-destructive-the-inter-1766262905</a></p> -
<blockquote class="ipsBlockquote" data-author="Crucial" data-cid="566628" data-time="1458679316">
<div>
<p>A 'close to the topic' link which may provide some interesting scifi reading </p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://io9.gizmodo.com/the-1946-story-that-predicted-how-destructive-the-inter-1766262905'>http://io9.gizmodo.com/the-1946-story-that-predicted-how-destructive-the-inter-1766262905</a></p>
</div>
</blockquote>
<p> </p>
<p>Thanks.</p>
<p>That is actually quite remarkable. Imagine the imagination required to come up with a story like that in 1946!</p> -
<p>It's also odd that it is a sci-fi take on automation that is slightly different to the ones that commonly entered more mainstream movies/TV.</p>
<p>We have often had the 'computers getting all control freak on us' stories, but this one is quite cool in having the well meaning AI being too literal in a very scary sense. I thought the whole 'I notice you are annoyed with your spouse, here is a way you could kill them and get away with it' thing quite funny.</p> -
<blockquote class="ipsBlockquote" data-author="Crucial" data-cid="566646" data-time="1458684632">
<div>
<p>It's also odd that it is a sci-fi take on automation that is slightly different to the ones that commonly entered more mainstream movies/TV.</p>
<p>We have often had the 'computers getting all control freak on us' stories, but this one is quite cool in having the well meaning AI being too literal in a very scary sense. I thought the whole 'I notice you are annoyed with your spouse, here is a way you could kill them and get away with it' thing quite funny.</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Good point. That is one of the fears of AI, and he pretty much nailed it way back when.</p> -
<p>If I worked in IT, I'd probably be slightly concerned at the assumption that whatever super-intelligence you guys eventually create is going to have a severe case of Asperger's.</p>
<p> </p>
<p>But here's a thought - before you hit the "Enter" key, why don't you run it all past a few of the cool people of the world. </p> -
Hey I'm way cool! All the guys in the chess club tell me so!!!<br><br>
I'd say IT is one of the first places AI is going to outstrip humans in terms of industry. Code writing code will be a lot more efficient, even if it still needs creative guidance occasionally. -
<blockquote class="ipsBlockquote" data-author="NTA" data-cid="566706" data-time="1458716875">
<div>
<p>Hey I'm way cool! All the guys in the chess club tell me so!!!<br><br>
I'd say IT is one of the first places AI is going to outstrip humans in terms of industry. Code writing code will be a lot more efficient, even if it still needs creative guidance occasionally.</p>
</div>
</blockquote>
<p>It will be the more mundane repetitive brainless tasks that go first, like accountants - you don't need an AI to replace them. Sub-AI will do it.</p>
<p>Be much harder to replace a Coder than an accountant or engineer or DBA. But eventually if AI comes along all jobs will go.</p> -
<blockquote class="ipsBlockquote" data-author="Chris B." data-cid="566704" data-time="1458716066">
<div>
<p>If I worked in IT, I'd probably be slightly concerned at the assumption that whatever super-intelligence you guys eventually create is going to have a severe case of Asperger's.</p>
<p> </p>
<p>But here's a thought - before you hit the "Enter" key, why don't you run it all past a few of the cool people of the world. :)</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Who are the cool people?</p> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566776" data-time="1458761817"><p>It will be the more mundane repetitive brainless tasks that go first, like accountants - you don't need an AI to replace them. Sub-AI will do it.<br>
Be much harder to replace a Coder than an accountant or engineer or DBA. But eventually if AI comes along all jobs will go.</p></blockquote>
While offshore outsourcing is still cheaper for IT, it will beat AI.<br><br>
Then it will depend on the speed of learning. -
<div> </div>
<div>
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566778" data-time="1458761895">
<div>
<p>Who are the cool people?</p>
</div>
</blockquote>
<p> </p>
<p>Me. Henry Winkler. That might be about it. No - probably Samuel L. Jackson as well.</p>
<p> </p>
<p>I also have some slightly more serious thoughts on that article. One is that we are undoubtedly edging closer to being able to create AGI/ASI in the same way that we are edging closer to being able to create actual biological life. But, there's still a quantum leap that needs to be made and it's far from guaranteed that we can make that leap. I'm not really a believer that there are no limits to technology. Some things we can imagine might just not be possible. And who knows how close we are to those limits.</p>
<p> </p>
<p>I guess I'm not quite as sanguine as Gollum about the likelihood of the ASI going berserk, but it doesn't seem very likely. It does smack a bit of the AI having a pretty severe dose of Asperger's, and I don't see why it would. Maybe if it decided to use the Bible as a template for how God should act, but I'd imagine it would be too smart for that. It's surely going to have assimilated everything we know, including all the ethical stuff, and I don't really see why it would choose to reject it all and think, "I'm going to kill all the ants, because..."? "I'm going to enslave all the ants, because..."? It's impossible to predict, but aside from watching lots of movies, these don't seem like very likely outcomes.</p>
<p> </p>
<p>I guess, like Dogmeat, I've come quite a distance in a fairly slow time machine and, in some ways, I'm faintly disappointed at our progress. When I was in the primers (Year 1 newbies), we had this "You will go to the moon" book <a data-ipb='nomediaparse' href='http://sagansense.tumblr.com/post/46566070121/you-will-go-to-the-moon-by-mae-and-ira-freeman'>http://sagansense.tumblr.com/post/46566070121/you-will-go-to-the-moon-by-mae-and-ira-freeman</a></p>
<p> </p>
<p>Written in 1959 and, unless you're Neil Armstrong or a handful of others, it's not looking likely. In fact, if you wander around your house there's a lot of stuff that's much smaller, much faster and much more efficient. But not that much stuff that someone from the '60s wouldn't be able to relate to. Almost nothing at all.</p>
<p> </p>
<p>You'd be going, "Hey look this is my mobile phone".</p>
<p> </p>
<p>He'd be going, "Wow, it's a bit like a walkie talkie, but better".</p>
<p> </p>
<p>"No, but it's got a camera in it".</p>
<p> </p>
<p>"Wow, like my Kodak Instamatic".</p>
<p> </p>
<p>"No, no - it's much more powerful. It's got 100 times more power than the computer on the Apollo XI landing module".</p>
<p> </p>
<p>"Wow - it's landed you on the moon?"</p>
<p> </p>
<p>"No", says Buzz Aldrin, stepping in from stage left. "He's done fuck all with it. He uses it to take selfies".</p>
</div> -
<blockquote class="ipsBlockquote" data-author="Chris B." data-cid="566826" data-time="1458773242">
<div>
<p> </p>
<div> </div>
<div>
<p> </p>
<p>Me. Henry Winkler. That might be about it. No - probably Samuel L. Jackson as well.</p>
<p> </p>
<p>I also have some slightly more serious thoughts on that article. One is that we are undoubtedly edging closer to being able to create AGI/ASI in the same way that we are edging closer to being able to create actual biological life. But, there's still a quantum leap that needs to be made and it's far from guaranteed that we can make that leap. I'm not really a believer that there are no limits to technology. Some things we can imagine might just not be possible. And who knows how close we are to those limits.</p>
<p> </p>
<p>I guess I'm not quite as sanguine as Gollum about the likelihood of the ASI going berserk, but it doesn't seem very likely. It does smack a bit of the AI having a pretty severe dose of Asperger's, and I don't see why it would. Maybe if it decided to use the Bible as a template for how God should act, but I'd imagine it would be too smart for that. It's surely going to have assimilated everything we know, including all the ethical stuff, and I don't really see why it would choose to reject it all and think, "I'm going to kill all the ants, because..."? "I'm going to enslave all the ants, because..."? It's impossible to predict, but aside from watching lots of movies, these don't seem like very likely outcomes.</p>
<p> </p>
<p>I guess, like Dogmeat, I've come quite a distance in a fairly slow time machine and, in some ways, I'm faintly disappointed at our progress. When I was in the primers (Year 1 newbies), we had this "You will go to the moon" book <a data-ipb='nomediaparse' href='http://sagansense.tumblr.com/post/46566070121/you-will-go-to-the-moon-by-mae-and-ira-freeman'>http://sagansense.tumblr.com/post/46566070121/you-will-go-to-the-moon-by-mae-and-ira-freeman</a></p>
<p> </p>
<p>Written in 1959 and, unless you're Neil Armstrong or a handful of others, it's not looking likely. In fact, if you wander around your house there's a lot of stuff that's much smaller, much faster and much more efficient. But not that much stuff that someone from the '60s wouldn't be able to relate to. Almost nothing at all.</p>
<p> </p>
<p>You'd be going, "Hey look this is my mobile phone".</p>
<p> </p>
<p>He'd be going, "Wow, it's a bit like a walkie talkie, but better".</p>
<p> </p>
<p>"No, but it's got a camera in it".</p>
<p> </p>
<p>"Wow, like my Kodak Instamatic".</p>
<p> </p>
<p>"No, no - it's much more powerful. It's got 100 times more power than the computer on the Apollo XI landing module".</p>
<p> </p>
<p>"Wow - it's landed you on the moon?"</p>
<p> </p>
<p>"No", says Buzz Aldrin, stepping in from stage left. "He's done fuck all with it. He uses it to take selfies".</p>
</div>
<p> </p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>I think you are making the mistake of assuming that emotions or other such 'special' personality traits are relevant. They really are not. Forget some sort of mystical barrier of 'soul' or emotion; it is irrelevant. They are not trying to create an artificial human brain; they are trying to create an artificial intelligence. It is a mistake a LOT of people make when thinking of AI. </p>
<p> </p>
<p>I have yet to read anyone who says that AI would destroy man because of any human emotion (although that depends on how you define a human emotion). I blame sci-fi films, which portray AI in a 'humanistic' fashion. At its crux AI is about self-teaching, not about learning how to have emotions.</p>
<p> </p>
<p>The second part of the article does a far better job of explaining it.</p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html'>http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html</a></p>
<p> </p>
<p>I completely reject your premise of slow progress; advancements since 1960 are incredible. Medicine, the internet, communications.. all hugely advanced technology. As for you going to the moon, that is an economic issue, not a technology issue. Unless you are saying you don't think Earth has the technology for moon tourism? Why would a business start to do that? Although in regards to advancements, I do subscribe to the theory that we move in an S-shape rather than a straight line.</p>
<p> </p>
<p>I think we are a heck of a lot closer to creating AI (not human brains) than we are actual biological life. Although my knowledge in the field of creating biological life is limited. Do you have any links to show the progress made?</p> -
<p>Perhaps - but, </p>
<p> </p>
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566847" data-time="1458777001">
<div>
<p>I think you are making the mistake of assuming that emotions or other such 'special' personality traits are relevant. They really are not. Forget some sort of mystical barrier of 'soul' or emotion; it is irrelevant. They are not trying to create an artificial human brain; they are trying to create an artificial intelligence. It is a mistake a LOT of people make when thinking of AI. </p>
</div>
</blockquote>
<p> </p>
<p><span style="font-size:12px;">Perhaps, but I don't think so. I didn't find some of the author's analysis particularly convincing. e.g.</span></p>
<p> </p>
<p><span style="font-size:12px;"><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">"So we’ve established that without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal."</span></span></p>
<p> </p>
<p><span style="font-size:12px;"><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">We haven't really established any such thing. We simply don't know what would happen. </span></span></p>
<p> </p>
<p><span style="font-size:12px;"><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">I didn't find his Turry analogy particularly convincing, because it seems much more like an Artificial Narrow Intelligence that's got out of control than an Artificial Super Intelligence that is vast dimensions more intelligent than us. An ASI that's still trapped in a programmed box we made for it of making little handwritten notes? I'd think it's much more likely that it's going to be able to re-programme itself to do whatever it wants. And that's entirely unpredictable, but eventually presumably will encompass anything and everything that is possible. Seems like a more logical endpoint. </span></span></p>
<p> </p>
<p><span style="font-size:12px;"><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">Much like a nest of ants, we might get wiped out along the way, but we might not as well. I'd tend to think we would just be a bit irrelevant to whatever purpose the ASI would develop for itself.</span></span></p> -
<blockquote class="ipsBlockquote" data-author="Chris B." data-cid="566861" data-time="1458781305">
<div>
<p>Perhaps - but, </p>
<p> </p>
<p> </p>
<p>Perhaps, but I don't think so. I didn't find some of the author's analysis particularly convincing. e.g.</p>
<p> </p>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">"So we’ve established that without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal."</span></p>
<p> </p>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">We haven't really established any such thing. We simply don't know what would happen. </span></p>
<p> </p>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">I didn't find his Turry analogy particularly convincing, because it seems much more like an Artificial Narrow Intelligence that's got out of control than an Artificial Super Intelligence that is vast dimensions more intelligent than us. An ASI that's still trapped in a programmed box we made for it of making little handwritten notes? I'd think it's much more likely that it's going to be able to re-programme itself to do whatever it wants. And that's entirely unpredictable, but eventually presumably will encompass anything and everything that is possible. Seems like a more logical endpoint. </span></p>
<p> </p>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">Much like a nest of ants, we might get wiped out along the way, but we might not as well. I'd tend to think we would just be a bit irrelevant to whatever purpose the ASI would develop for itself.</span></p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>You are quite correct, we don't know. However I am unaware of any serious research or advancement that does not involve the AI being amoral... or amoral as far as self-determination goes. So I don't think the author's conclusion unreasonable. In fact I think it is far more of a stretch to project a moral compass onto an AI. You still seem to be basing your understanding on your own definition of what AI is. According to the research track that is currently progressing, and the end goal of the AI research, the author's conclusion is valid. What you seem to be describing is not really AI, but something else entirely, and therefore your conclusion is accurate... what you are describing would be very difficult to imagine being created given where we currently stand.</p>
<p> </p>
<p>Indeed, if you are talking about morality... then there is a strong argument you are no longer talking about AI, but something else entirely.</p> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566862" data-time="1458781926">
<div>
<p>You are quite correct, we don't know. However I am unaware of any serious research or advancement that does not involve the AI being amoral... or amoral as far as self-determination goes. So I don't think the author's conclusion unreasonable. In fact I think it is far more of a stretch to project a moral compass onto an AI. You still seem to be basing your understanding on your own definition of what AI is. According to the research track that is currently progressing, and the end goal of the AI research, the author's conclusion is valid. What you seem to be describing is not really AI, but something else entirely, and therefore your conclusion is accurate... what you are describing would be very difficult to imagine being created given where we currently stand.</p>
<p> </p>
<p>Indeed, if you are talking about morality... then there is a strong argument you are no longer talking about AI, but something else entirely.</p>
</div>
</blockquote>
<p> </p>
<p>I think any outcome is possible. But if you assume that one of the first things the super-intelligence would do is to assimilate all of human learning, then that's going to include all sorts of ethical and moral works and ideas as well.</p>
<p> </p>
<p>Who's to say whether or not it would regard these as relevant or irrelevant? Even if programmed to regard them as relevant, if it's able to move as far up the ladder of intelligence away from us as depicted, then it's likely going to be able to override anything we try to build into it.</p>
<p> </p>
<p>Is it possible to be that intelligent, but not to consider moral questions?</p> -
<blockquote class="ipsBlockquote" data-author="Chris B." data-cid="566866" data-time="1458783223">
<div>
<p> </p>
<p> </p>
<p>Is it possible to be that intelligent, but not to consider moral questions?</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Not only do I think it is possible, it is, in my (and most AI researchers') opinion... very VERY likely.</p>
<p> </p>
<p>Well, I guess it could consider moral questions, just not make decisions based on human morality. It would be an abstract term. If it gets so intelligent, so far up the ladder from us... then why would it take a humanistic view of morality? Any more than we look at the ethical code of ants?</p> -
<div> </div>
<div>
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566872" data-time="1458784045">
<div>
<p>Not only do I think it is possible, it is, in my (and most AI researchers') opinion... very VERY likely.</p>
<p> </p>
<p>Well, I guess it could consider moral questions, just not make decisions based on human morality. It would be an abstract term. If it gets so intelligent, so far up the ladder from us... then why would it take a humanistic view of morality? Any more than we look at the ethical code of ants?</p>
</div>
</blockquote>
<p> </p>
<p>I'm not sure whether the first is necessarily a good assumption and it will likely make a significant difference in outcomes.</p>
<p> </p>
<p>In the second, I largely agree - one major difference to the ants is that at least the ASI will be able to read our codes of ethics and decide which bits - if any - might be relevant to it. </p>
<p> </p>
<p>On the whole, Henry, Sam and I agree that it would be good to try to interest the ASI in ethics. </p>
</div>