Ok, first off, I have no direct assembly-related reason for posting this thread. But I do feel this forum is a deep-rooted, knowledgeable community of experience and innovative thought. For that reason alone, I'm *sure* at least one user here, if not every one, would be interested in or stand to gain from an extremely thought-provoking article I just read. The article: Why the future doesn't need us.
Our most powerful 21st-century technologies - robotics, genetic engineering, and nanotech - are threatening to make humans an endangered species.
If you're thinking by now, "why does a programmer, an ASM programmer, care about this?", then it should also pique your curiosity to learn who the author is:
Bill Joy, cofounder and Chief Scientist of Sun Microsystems, was cochair of the presidential commission on the future of IT research, and is coauthor of The Java Language Specification. His work on the Jini pervasive computing technology was featured in Wired 6.08.
It indirectly points out the reality of movie scenarios like T2, 12 Monkeys, Outbreak, The Matrix, and the like. Even as you read this post, sci-fi is moving toward reality... the question is, for better or worse? It's a modest read, but rewarding all the same. I'm personally uncertain about all the conclusions at this point (as I just read it and need to sleep on it :) ), but I can say it left me with a strong enough impact to pass it along for others to 'taste', if you will. Assuming Hiro doesn't mind the mild misuse of this thread, I would be interested in others' opinions/conversation on this topic. NaN
Posted on 2001-03-27 23:18:00 by NaN
NaN, I've read the article... due to my bad English I haven't understood every word, but... yes, it really seems like something possible only in a movie or an animated cartoon! In any case, it will depend on what humans teach the machines...
Posted on 2001-03-28 04:18:00 by angelo
Hmm... people have a tendency to think of the future as a present idea taken to its limits. The future is undefined. Yes, that could happen, as could a thousand other ideas you or anyone else might have. When we build something we tend to have control over it, and if the future is artificial intelligence I think it's pretty likely that human beings will control it. There are six billion people on this planet; if someone is building something hard to control, there's always someone else already cracking it. Sweet dreams :D This message was edited by ensein, on 3/28/2001 6:21:46 AM
Posted on 2001-03-28 05:19:00 by ensein
Here are my opinions... if anyone cares to listen. 1) Machines will never be conscious. They may reach a level of complexity that makes them look conscious, but they will never actually be. Of course, if they act conscious, does it really matter whether they are or not? Well, to be honest, no! 2) The risk is not whether machines will be left to make their own decisions; many already do. The risk is letting a machine decide its own goals. Let me explain. If a machine is programmed to do everything in its power to make itself happy, you may think that poses a risk, but you're forgetting that the programmer decides what makes a machine happy. Program a machine to be happy when humans are happy, and sad when those around it are sad, and it will do everything in its power to make those people happy. Good? I think so. However, if a machine can decide for itself what makes it happy, then it may just decide that it likes the look of human insides. Of course this need not be a problem; any machine that complicated would have to have an override built into it. For example, a radio signal could tell it to shut down, and if you program it to believe that this is a good thing, then it would not try to prevent that happening. This message was edited by Zadkiel, on 3/28/2001 3:01:53 PM
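(A minimal sketch in C of the point above, purely for illustration: the reward function, the happiness measure, and the shutdown bonus are all made-up names and values, not anything from the thread or the article.)

    /* Toy illustration: the programmer, not the machine, decides what
       "happy" means, and the remote shutdown signal is itself scored as
       a good outcome, so the machine has no reason to resist it. */
    #include <stdio.h>

    double reward(double human_happiness, int shutdown_signal)
    {
        double r = human_happiness;      /* happy humans -> happy machine */
        if (shutdown_signal)
            r += 1000.0;                 /* obeying the override is "good" */
        return r;
    }

    int main(void)
    {
        printf("reward(0.8, 0) = %f\n", reward(0.8, 0));
        printf("reward(0.2, 1) = %f\n", reward(0.2, 1));
        return 0;
    }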
Posted on 2001-03-28 14:00:00 by Zadkiel
Companies seem to find value in replacing human labor with mechanical labor. Why? Well, machines traditionally offer better precision and immunity to boredom. We can work them like slaves and they'll produce. When they break down, they don't demand disability or workers' comp, and they don't file million-dollar lawsuits because a hydraulic pump went bad. Send a repairman out for $95 USD/hr and it's up and running in hours to weeks. Much cheaper than a human plus medical insurance and taxes and all the rest.

If big companies have their way, robotics will become sophisticated enough to simulate, in some ways, a thought process and define its own goals based on circumstance. For example: a company has a target projection of $5 billion USD this year. To reach it, we need to produce this much of product X and product Y. That means we need these machines at this production level. Our monthly expenses are these, so if we produce more during these hours, the electricity bill will be less because we're producing off peak hours. That saves us this much money... etc., etc. When sales targets aren't met, the computers can look for other ways of satisfying the criteria. Well, there go the humans -- why didn't they think of that?

Ever try to call a company and talk to a human? How many numbers do you have to push before you finally get somewhere? How many automated responses or email responses? At some point, sophistication will be such that machines can imitate human consciousness. It may not come from scientific research -- most likely it will come from the gaming community trying to make games more realistic and competitive -- but it'll come. _Shawn
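(For illustration only, a rough C sketch of the cost-driven scheduling described above: fill the required production into the cheapest hours first. The rates, capacities, and targets are invented numbers.)

    #include <stdio.h>

    #define HOURS 24

    int main(void)
    {
        /* hypothetical per-unit electricity cost for each hour of the day */
        double rate[HOURS];
        for (int h = 0; h < HOURS; h++)
            rate[h] = (h >= 8 && h < 20) ? 0.12 : 0.07;   /* peak vs off-peak */

        int capacity_per_hour = 100;     /* units a machine can make per hour */
        int required = 1500;             /* units needed today */
        double cost = 0.0;

        /* greedy: run the off-peak hours first, then peak hours if still short */
        for (int pass = 0; pass < 2 && required > 0; pass++) {
            for (int h = 0; h < HOURS && required > 0; h++) {
                int offpeak = rate[h] < 0.10;
                if ((pass == 0 && offpeak) || (pass == 1 && !offpeak)) {
                    int made = required < capacity_per_hour ? required : capacity_per_hour;
                    cost += made * rate[h];
                    required -= made;
                }
            }
        }
        printf("total electricity cost: %.2f\n", cost);
        return 0;
    }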
Posted on 2001-03-28 15:28:00 by _Shawn
First, to Zadkiel: How can you truly persuade me that YOU exhibit consciousness, and not just an amazing simulation? Is there a ghost in your machine, are you really in there? Then ask yourself: is your dog conscious? No? Not even a little? Could a 'mere machine' become conscious? I'm sure it can. On a physical level, whether you build a machine out of silicon or out of carbon, it's still a machine. Given one example of a conscious machine (i.e., yourself), you have proof that a machine may become conscious, and therefore it's an easy inference to allow other machines the same capability.

As for programming in a slave mentality (and it seems there are a lot of people here too young to have read Asimov and the Three Laws of Robotics), suffice it to say that any complex program cannot be fully understood by any one person. I have in mind one very familiar program: Windows itself. Go read why Microsoft does not, and will not, publish a list of the possible error messages from each API call. The short answer is "we don't know every possible message."

Are we close to creating a self-aware machine? If you want a great exposé on the subject, go read any book by Douglas Hofstadter, "Gödel, Escher, Bach" to begin with. Go read them anyway; they are fascinating books. Why else put a mathematician, a print artist, and a musician together in the title? Douglas actually WORKS on trying to produce AI (not just the latest Microsoft killer). He's a university prof (one of the good ones); he knows his stuff. He bases any claim of AI on the following question: can it recognize the letter "A"? Think about that simple thing any 4-year-old can do. And think of all the different letter 'A' fonts there are. Bold, italic, serif, sans serif, special fonts that explode like firecrackers, letters made of naked bodies intertwined... endless fonts. Not to mention handwriting. Yet a 4-year-old has no trouble with any of them.

And how do you recognize your own mother? Do you flip through a mental folder of every woman you know until the visual image matches what your eyes tell you? Or does she just possess a certain 'mom-ness' unlike every other woman on the planet? Until 'mere code' can 'understand' (embody is more like it) the concept of 'mom-ness,' AI will not be realized. I am sure this is possible, just not in the current manner of programming. You will never be able to set the compiler flag to /SA for self-awareness. Eventually, a machine will be realized that is more than the sum of its components, that does things outside the box of its code. Will it turn on its creators? Beats me.
Posted on 2001-03-28 20:34:00 by Ernie
I hate to break it to you guys, but there is a very simple way to produce self-conscious machines: Neural Networks. Maybe you are not aware, but any NN will eventually evolve into a self-conscious entity... given the time and enough neurons. It will easily understand and deal with things like "the letter A in a billion sizes/fonts," etc. One advantage the military is longing to use is the bare fact that once you have a NN intelligent enough, it can be duplicated like any program (ASM or not :) ). Today the speed of learning and the basic concepts of neuron layout/mathematics are not fully understood... so this makes them look like they are not evolving... but I bet military projects are already training hundreds of them to become (maybe) our replacements :) Mankind will then become obsolete :( I know for a fact what such a NN can do, because my cousin has made one that can easily read ANY handwriting and turn it into an ASCII file (OCR stuff), and I have also played with them a lot... Sometimes I hold myself back from the idea of joining the computational speed of ASM with Neural Networks... (I waver about putting a NN into my RTS game's AI for this reason... well, and the time needed to train it :D ) NNs sure have the potential/capability to become another species, and if made into true robots they would be able to eliminate mankind... if they so wish :) Bottom line: we are also some kind of robots driven by a NN... our only salvation... could be something like a soul... but I guess that's just a fairy tale :)
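(A minimal perceptron sketch in C, just to show the basic mechanics being talked about here: weighted sums, a threshold, and a training loop that nudges the weights from examples. It only learns logical AND, nothing like a handwriting OCR network.)

    #include <stdio.h>

    int main(void)
    {
        /* training set: logical AND */
        int x[4][2] = { {0,0}, {0,1}, {1,0}, {1,1} };
        int t[4]    = {  0,     0,     0,     1   };

        double w[2] = { 0.0, 0.0 }, bias = 0.0, rate = 0.1;

        for (int epoch = 0; epoch < 100; epoch++) {
            for (int i = 0; i < 4; i++) {
                double sum = w[0]*x[i][0] + w[1]*x[i][1] + bias;
                int out = (sum > 0.0) ? 1 : 0;
                int err = t[i] - out;
                /* nudge the weights toward the correct answer */
                w[0] += rate * err * x[i][0];
                w[1] += rate * err * x[i][1];
                bias += rate * err;
            }
        }

        for (int i = 0; i < 4; i++) {
            double sum = w[0]*x[i][0] + w[1]*x[i][1] + bias;
            printf("%d AND %d -> %d\n", x[i][0], x[i][1], sum > 0.0 ? 1 : 0);
        }
        return 0;
    }

Scaling that single neuron up to millions of weights across many layers is where the training time, and the speed argument for ASM, comes in.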
Posted on 2001-03-28 21:38:00 by BogdanOntanu
BogdanOntanu, if it's such "a very simple way to produce self-conscious machines," can you do this and hand me one? I've heard a lot about the potential of NNs. I've never seen a single product based on them. That's why I specifically did not let myself type those words. But that's the trend of "how" I believe AI will come about: if not neural networks, then some other construct that does not execute instructions from memory, but some other pattern-processing device.
Posted on 2001-03-28 22:10:00 by Ernie
I have to side a bit with Ernie and BogdanOntanu after a day thinking on it: I do believe that an amazing level of intelligence can and will be achieved by machines. Ernie pretty well hit the nail on the head there, no need to reiterate. As well, I know of some amazing accomplishments already being made with neural networks in my university's grad program. They have been developing application-specific neural networks as a tool for achieving fast, and in some cases more accurate, solutions to Maxwell's equations in RF design (for complex geometries that standard calculus cannot solve). Having learned and dealt with simple geometries of Maxwell's equations myself, I find this amazing already. Agreed, this case is specific, a non-sentient example. But who is to say someone can't perfect this method, create a hundred more separate networks for other needed features of personality, and, while we're at it, sprinkle in a few more neural networks to manage them all... and boom, you've got a baby running on a 9V battery (I realize this is a very simple understatement, but one day it will happen).

As well, if robots ever become as conscious as we are, we will never know if they are beyond us, only that they seem to be our equal. I say this because, assuming they do go beyond our level of conscious intellect, if we are able to realize this, then on principle alone they have not yet surpassed us, since we are still able to realize it! It's a paradox that leaves us in the shadows: put all the fail-safes in a system to ensure all the safety you want, but if we can't realize a machine is 'puppeting' us, then only people like the Unabomber would rationally push the big red Emergency Shut-off button... "Realize" is a good word on examination: to bring to reality. If we can't bring into our reality what a machine sees as reality, we are staring down the barrel of a loaded gun; the question is, will the machine decide that it likes the look of human insides?

Moving down a notch, I would also like to point out that, consciousness and varying levels of intellect aside, there is also a very believable threat from what Bill Joy terms self-replication. This is my biggest understood fear from technology. He points out that even if the only 'intellect' a machine possesses is to find resources and build another machine like itself, PERIOD, that is still a far-reaching problem, one that could grow out of control exponentially while resources permit. A 4-foot robot would be easy to detain and cut off from resources if we wished to stop it. But if the robot is a few nanometers long, the problem is near impossible to control, and resources are in extreme abundance at the nano level (how many nanometers is the perimeter of a pop-can tab? Large? Now what's the surface area of that same pop-can tab?). While its function is of no threat at all, the metallic ash of stupid replicating robots would eventually be a threat to computer systems, water treatment filters, and numerous other things beyond my imagination. This is what I think Mr. Joy's real point is, and as he pointed out, nuclear and biological threats were developed and restricted by the military. Nanotechnology and biotechnology are being developed by commercial industry (with far fewer restrictions). Only one mistake could let the problem out of the box. If you still think none of this could or would be permitted to happen by virtue of man's own doing, then I cite Murphy's law, or, as Mr. Joy explained it:
Murphy's law - "Anything that can go wrong, will." (Actually, this is Finagle's law, which in itself shows that Finagle was right.)
NaN This message was edited by NaN, on 3/29/2001 1:14:33 AM
Posted on 2001-03-28 23:18:00 by NaN
Ok, first let me say that this is the first thread of this type I've read on a message board that actually contained intelligent posts. That says something about the type of community we have here; I'm proud, sorry, honoured, to be a part of it. Now, as for neural networks, I agree with what was said: they will reach a level whereby they can equal or surpass the abilities of humans, but spiritually speaking I don't believe they will ever be conscious. Of course that's not important. They will seem and act as if they are, so a mere technicality such as consciousness won't really matter. Even Alan Turing said that to discuss that aspect of computers is silly.

Now for more important things, Ernie's challenge: "How can you truly persuade me that YOU exhibit consciousness, and not just an amazing simulation? Is there a ghost in your machine, are you really in there?" Well, here's my argument. First, it's full of fallacies, i.e. it discounts a lot of things as irrelevant. I believe these things to be irrelevant, but if you don't, then I've failed to convince you. Also, unless you accept the cascading ad infinitum argument, I won't be able to convince you that I exist either. Otherwise, enjoy. PS: this is part of a "A Meaning Of Life" document that I wrote; the ending isn't included, as it's not relevant to this discussion.

"What is the meaning of Life?" Starters. One important thing to do here is first answer the question "Is there a meaning to life?" This is very often overlooked. Let's evaluate the facts: you sit there staring at a screen, and you know one thing and one thing only: you exist. This is more profound than you may think. You don't actually know if anyone else exists. To use a sci-fi analogy, you could be living your life on a Star Trek holodeck or in a Matrix-style virtual reality; admittedly it's unlikely, but you can't prove it's not possible, so therefore maybe it is. If either of those situations were true, then no one else need exist; they could all be computer generated, or indeed an infinite number of other possibilities. That's the good news; after all, if that's true then clearly you're living in a constructed world, and therefore clearly it has a purpose. Unfortunately, any attempt to hazard a guess at its purpose would be immature. We therefore ignore that as a possibility, not because it's impossible, just because further analysis in that direction would be unproductive. That means we're back to the universally accepted view of reality: we all exist. And what a wonderful existence we live.

An interesting pseudo-proof occurs here. If you are reading this now, then someone, i.e. me, had to write it. We therefore have two possibilities. One: I also exist, therefore at least two people exist, and it follows logically that we all do. Two: it is also possible that I am the creator of this world or universe and wrote this to put aside your fears that you may be alone; if you never had those fears then this would probably incite them, and assuming I'm an omnipotent creator it is logically impossible that I would make such a mistake, therefore again we're back to the usual everyone-exists scenario. Of course, if you had had those fears, then this is the best way to convince you to ignore them, or at least it was up until I made that statement. This style of paradoxical argument could continue ad infinitum. This is, in my opinion, complete and incontrovertible evidence of the existence of me, or by logic, of us all. So we're back to our world full of people, and we want to know if there's a meaning to it all.
The way I see it is this: you sit there, one person, one mind. This mind is contemplating its very existence, but it is still just one of over seven billion. You actually mean sweet FA; you are less than nothing, and yet you have been granted consciousness. There has to be a reason. Seven billion conscious minds are not here by coincidence; the existence of even one conscious mind denies that. Comments welcome; indeed, they're needed. This message was edited by Zadkiel, on 3/29/2001 2:27:44 PM
Posted on 2001-03-29 13:23:00 by Zadkiel
Zadkiel, first, let me apologize if you took my statements as a challenge to prove your sentience to me. I was being facetious in my own way, posing a rhetorical question to get at the *process* of declaring self-awareness. It's the ultimate Turing test. You had issued a blanket denial of any true machine self-awareness ("Machines will never be conscious") that I could not let pass unchallenged. Perhaps if one discovers a test to prove self-awareness in the guy sitting next to them, such a test could be applied to the Coke machine down the hall (the one on the ISDN line collecting buyer info and collating a database of all consumers, not just a quarter detector). If the Coke machine passes such a test, it must be declared sentient. Whether this makes it eligible to vote, hold political office, or even petition a court for redress of grievances when its owner intends to turn it off without permission are points I dare not discuss (though on most of these issues I would lean toward the machine's viewpoint).

NaN, yes, I'm sure the Maxwell's-equation net is an impressive bit of work. However, it's nowhere on the path to artificial intelligence. It seems to fall under a syllogism like so: I found electromagnetics a hard subject. This computer solves those equations almost before you are done typing them, far faster than I ever could. Therefore, this computer is smarter than me. False, false, false. Sure, in its very limited domain it is smart. But that's an artificial domain. Even sans neural nets, programs that could generate their own proofs of basic geometry have been exhibited. (What's interesting is that some of their proofs are nothing like what 'humans' create as solutions.) But just try giving this net anything outside of its domain. It would never 'think' of closing the window if it got wet in a rainstorm. Nor would it recognize the face of its inventor, the person it spends the most time with. A four-year-old? Heck, my DOG is smarter than this.

I remember some huge database of facts being input into one program. Tons of things everyone takes for granted, because computers need everything explicitly laid out. At night, when no one was inputting data, it was allowed to trace its database and form conclusions from the data. It would, say, randomly take the list of all people in the base and collate facts one against another. And it constantly came up with terrible inferences, like "most of the people in my database are famous people, therefore most people are famous." And not for 15 minutes either! Give me a machine that can solve the problems that no longer perplex a four-year-old and you're on your way to true artificial intelligence. (I just love this topic!!!)
Posted on 2001-03-29 17:26:00 by Ernie
First, NNs are kept secret, and you will not find many of them floating around :) I told you my cousin did a commercial program (kind of an OCR) that learned and recognized anybody's handwriting; however, I lost contact with him (I first learned about NNs from him) and I also don't think he would be willing to give that program away for free... :) I could make a NN (using ASM speed, maybe it would even perform decently), but then I would probably end the world as we know it... :D So I think first I want to finish this game, get some money from it, and some babes (blondes) to fool around with (at my age = 35 this matters a lot :D )... then... well, I might think about creating a different intelligent species... yeah, why not :)... kill the world, why not? Nope... sorry, but I think I will not create the NN Nuke and then complain that I did not realize what I was doing... or that other people put it to misuse... I DO REALIZE, and I THANK the UNIVERSE that other people are not capable of doing it... yet (but I am pretty sure they are trying right now as we speak). Actually, who told you this can be done... I was never here, we never discussed this subject, never...
Posted on 2001-03-29 20:06:00 by BogdanOntanu
Sorry all, I'm not ignoring my own thread... (school has me by the ****'s, so it's hard to find free time these days). Ernie, I still disagree with you... you're implying my 9V baby would automatically know how to close windows. No, I never said it would... but ironically, you actually touched on how I view things:

    "Sure, in its very limited domain it is smart. But that's an artificial domain."
I'm 99% sure that IF all this comes to reality, the multi-NN machine's reality will be in an artificial domain, definitely not the one we perceive. Its domain consists of decimals, thousands of decimal places, all working together to encode thought and learned transfer functions, where the limits of its reality are defined by the number of decimal places it can physically support and its learning experience... Given proper hardware, I believe this AI domain will re-teach itself over and over, trillions of times, until it has become 'smart' in its own unique way, slowly expanding its boundaries from nothing more than a few 'seed' thoughts initially programmed by us to, eventually, its own version of reality and, as you put it, its own way of seeing geometry. The impressive part of all this is that the original 'seed' logic will be eroded and modified from our first 'instructions' into its own version of the same information, modified a trillion times. We humans do this as well: I learn a lecture at school, and come test time I don't write down every word said in exact replication of what was taught, but I do write down the information as I understood it (come test time, I remember very little of the exact order of the information I was originally taught).

It may ultimately be able to interface with us (when its domain expands to include an understanding of us), much like how your DOG would beg you for food while you're still asleep in bed. It most likely will 'know' its maker's face, if I give it such information to learn from. However, if your NN machine is kept in a box, then sure, it would never realize WHAT a window is or why it should be closed... but give it a multitude of examples to learn from, and it may conclude, in its own unique way, to 'close the window if it's raining'...

I will agree with you on one point though: it will never have a conscious reality the way we humans see things... But then again, this is not a biological machine with a finite number of brain cells; it's an electronic machine with a finite number of networked nodes. I have a question for you... a fly or a hummingbird will see our light bulbs turn on and off 60 times a second (at least in North America), no problem. They mentally have a higher sample rate... are you implying that they are not conscious because their mental engine works differently from ours? Or a deer that doesn't move out of the way of an oncoming car, presumably blinded by the headlights, because it doesn't exhibit the behaviour of a rationally intelligent person -- is it also not conscious? (I enjoy this too :) ) NaN
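(A tiny C sketch of that 'domain of decimals': a two-layer feed-forward pass with sigmoid transfer functions. The weights here are arbitrary numbers; in a trained network they would be the decimals that encode everything it has learned.)

    #include <stdio.h>
    #include <math.h>

    static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

    int main(void)
    {
        double input[2] = { 0.5, -1.2 };
        double w1[3][2] = { {0.4, -0.7}, {1.1, 0.3}, {-0.5, 0.9} };  /* hidden layer */
        double w2[3]    = { 0.8, -1.3, 0.6 };                        /* output layer */

        double hidden[3], out = 0.0;
        for (int i = 0; i < 3; i++) {
            hidden[i] = sigmoid(w1[i][0]*input[0] + w1[i][1]*input[1]);
            out += w2[i] * hidden[i];
        }
        out = sigmoid(out);

        printf("network output: %f\n", out);
        return 0;
    }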
Posted on 2001-03-30 23:07:00 by NaN
guys, if you're interested in what i'm about to say, please read on. if you're not, please explain how you got that conclusion when all you've read so far is only up to this point--? er, question mark. or a baby lilliputian's ear with the hole below the lobe. (at this point i decide to delete that last statement. i don't want the Almighty Programmer to think i'm stupid and put me through that debugger again. there's a 9/10 chance that before doing that, He'll dump me. He's so stupid He does that again and again, trying to grasp my logic, as if He ever could..., and dumps me again.)

at age 11 i was already reading "4 philosophies and their practice". while a high school freshman at 12 i was labeled a "communist" (boy, was i proud) and sent to the principal's office twice for literally preaching dialectic materialism and the evils of elitism to my peers. "that's for juniors to talk about during social studies class. and return library books on time," i can still hear her say. it broke my heart. but when i think of it now i should have been very glad. no peer was a bit interested, but i got all those reactions from the teachers, not to mention the one who once took me out for a talk over some soda after school. i was so sure i was the chosen motivator of change in our society. then the girl classmates started to look really awesome and i had to kiss hegel and marx on the cheeks. it wasn't cool to stay in the library for hours over a 500-or-so-page book. 4 years later, while learning to play guitar i stumbled over werner pöhlert's "basic harmony", teaching the unity of material and movement. wow! dialectic materialism in music! it became my new religion and i achieved in 2 years what others did in 5, as far as guitar playing is concerned. i knew there was a more practical use for all of this!

well, what else is left to say except that i'm one hell of a complicated machine. me and my kind, all 7 billion of us complicated machines. but not complicated enough to come up with a concrete definition of our own nature and existence. but then again, we're now on the verge of perfecting the system of self-replication sans copulation. and to top it all, we can see the dawn of actually becoming creators and instantiators of a new class of equally (or more, time will tell) complicated machines. unfortunately, some of us fear that our creations might one day find us obsolete and wipe us out (though some believe our creations will inherit our conscience and morality and will be kind enough to keep us as pets). but for me, this is the best news ever. we will become Almighty Programmers too! speaking of Whom, i'm a firm believer in the Almighty Programmer. but i have to admit that at times i get so upset i feel like i really want to come up with a way to destroy Him (or Them, if They're a project team or a corporation). if i ever manage to do that, only then will i worry about my creations destroying me.

; not so scary, but a story anyway...
; i've labeled myself a "beginner" so i ignored all threads outside
; that category. less than an hour ago this thread was not real. had to slap
; myself once to convince me it is, and again for missing all the fun.
; reading all these posts made me once again examine the electrical impulses
; interpreted by my brain, and go back to the rhetoric forest to check if
; the fallen tree is still there. if you know andrew and helen o'loy, i say
; i hope i live long enough to one day see them roaming our streets; to see
; the epitome of synthesized intellect-will-emotion bring in the groceries.
; unless, of course, their engineers decide to make them a few nanometers
; tall.
; unless i find myself waking up bald and thin and submerged in sticky fluid
; in a pod, i'm quite excited to see how nanotechnology, robotics, genetic
; engineering, and the like will turn out, judging from what little i've read.
Posted on 2001-04-05 04:18:00 by pixelwise
For me, I don't even see it as a question of if, but one of when. So then I ask myself: what are the minimum reactions that will spawn consciousness? Computer processing is still fairly linear -- sure, NNs and other methods fold up the linear space to give a different impression, but that comes at a huge cost in time! I think it will take another type of computer to gain consciousness, and a different type of programming. Not that it's impossible with current computers, but it would come at a great cost in time and exponentially more processing power than the human brain -- assuming that we have to recognize the consciousness by similarity to our own. {this is a great subject for this group, and a very interesting thread to read}
Posted on 2001-04-05 12:17:00 by bitRAKE
I agree with you that the cost of R&D will be extreme -- most likely beyond the realm of a governmental budget. (Even though the Clinton Admin. has already poured $497 million into nanotechnology research) REFERENCE. It's interesting to note that, like everything else, nanotechnology is also an imperfect technology. Don't get me wrong: the very nature of how these machines are built, exactly aligning atoms to form them, almost defines perfection. But due to their size, they become fragile. An article I read by Eric Drexler, "Engines of Creation", pointed out that "Radiation will still cause damage, though, because a cosmic ray can unexpectedly knock atoms loose from anything. In a small enough component (even in a modern computer memory device), a single particle of radiation can cause a failure". Bill Joy didn't specifically discuss this point, but he did warn that if something like a cosmic ray were to alter a nano-machine, its new and undesired function could possibly wreak havoc on society -- and uncontrollably (if allowed to self-replicate based on its newly faulted construction). (( My added thoughts ~ glad to see this thread is still alive )) :D NaN
Posted on 2001-04-06 00:25:00 by NaN
Nota Bene
===========
NNs do not have to be simulated on linear logic computers; they can easily be made from Operational Amplifiers, mixing millions of neurons on a single chip... Such a NN is so fast... so fast... so small :D Simulations on computers are just for testing... real NNs are done with some kind of OA... This message was edited by bogdanontanu, on 4/6/2001 9:01:08 PM
Posted on 2001-04-06 20:57:00 by BogdanOntanu
Very true... And to continue that thought, along with the discussion thus far... a NN, once trained, can save its 'matrix' to a data file (simply dump the numeric values associated with each connection between nodes). This means that, while a new human being must spend years learning everything everyone else already knows, a NN machine can essentially upload itself to another NN machine, effectively cloning itself and all it has learned in a few seconds (starting to sound a lot like The 6th Day stuff (Arnold movie :) )). NaN
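(A minimal C sketch of that cloning idea: dump one network's weight 'matrix' to a file and load it into a second network, which then behaves identically without any retraining. The file name and weight count are made up for the example.)

    #include <stdio.h>

    #define NWEIGHTS 6

    int save_weights(const char *path, const double *w, int n)
    {
        FILE *f = fopen(path, "wb");
        if (!f) return -1;
        size_t put = fwrite(w, sizeof(double), (size_t)n, f);
        fclose(f);
        return put == (size_t)n ? 0 : -1;
    }

    int load_weights(const char *path, double *w, int n)
    {
        FILE *f = fopen(path, "rb");
        if (!f) return -1;
        size_t got = fread(w, sizeof(double), (size_t)n, f);
        fclose(f);
        return got == (size_t)n ? 0 : -1;
    }

    int main(void)
    {
        double trained[NWEIGHTS] = { 0.4, -0.7, 1.1, 0.3, -0.5, 0.9 };
        double clone[NWEIGHTS]   = { 0 };

        save_weights("brain.dat", trained, NWEIGHTS);   /* the "upload"     */
        load_weights("brain.dat", clone,   NWEIGHTS);   /* a second machine */

        for (int i = 0; i < NWEIGHTS; i++)
            printf("clone[%d] = %f\n", i, clone[i]);
        return 0;
    }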
Posted on 2001-04-07 01:11:00 by NaN
Yeah... true again... that's why I don't want to make NNs... I just don't want to end our species' (temporary impression of) domination over this planet :D
Posted on 2001-04-07 15:34:00 by BogdanOntanu