I have been thinking about the future of organic species as a whole. This is what I thought of.
Humans have been on planet Earth for at least 200,000 years. In that time, at least a billion people have died as a result of war or disease. We have also achieved a lot. But we can die so easily. Raise the temperature of the Earth by 2 degrees and bang, humans start suffering and dying. Increase the pressure slightly and we start getting crushed.
Expose humans to radiation and we start dying and mutating.
However....
Modern robots and computers have been on planet Earth for roughly 40-50 years, and they already possess more strength than humans and are much faster (though not more intelligent). Computers and robots can solve equations much faster than humans. They are better at analysis and can do in a second work that would take a human an entire day.
Robots can survive extremes of pressure, radiation, and temperature that would kill a human instantly. Robots are much more resilient to chemicals than humans. Robots are not infected by diseases. Robots can think collectively, making them much more efficient than any organic life.
________
Once robots gain cognitive learning and better collective knowledge, I suggest humans will be out of the picture. Robots, once developed properly, could be much more effective than humans.
Next, robots would never destroy themselves, as they follow a set of instructions rather than being outright rebellious.
Robots base everything on probability, making them effective in every respect.
I think it will be very tricky in the coming centuries to build a truly artificial life superior to ourselves, and that the short-term way forward is probably along the lines of gene manipulation and cyborg technology. ( https://www.youtube.com/watch?v=AyenRCJ_4Ww )
However, pure synthetic life is very interesting, although probably very far off for now (at least if you want it to have some kind of intelligence). It would be especially interesting, in many centuries or millennia, to see how far quantum computing technology can go and whether it can produce life with intelligence far beyond what we currently have, while existing in a state that is far more survivable. This is probably inevitable if we manage not to wipe ourselves out in the meantime.
Robots are not infected by diseases.
Robots are affected by viruses. Computer viruses could be the robots' version of disease...
Robots can think collectively, making them much more efficient than any organic life.
Now, you see, computer viruses are not a form of life that occurs naturally in the universe; they have to be created, which makes them extremely rare, and even if they are created, we can make antivirus software.
Ants have been on Earth for 100,000,000 years, yet a single human with two wooden sticks can burn an entire ant colony. Ants can be killed easily and cannot think like computers or humans.
We are already building good AI using silicon chips, but imagine using quantum computers; we could improve it much further.
I think you'll find that, in terms of raw speed and processing of information, the human brain far surpasses any of the fastest supercomputers in the world. The reason computers appear faster comes down, as I see it, to two things: they are more focused, and they don't have to process secondary data. If a robot had to process everything a human does (collating data from the senses, sending signals subconsciously to various parts of the body, processing the surrounding environment in detail for possible threats, etc.), would it have much capacity left over for the kind of processing a human brain can do? And that is even with the average human supposedly only using about 10% of their brain at any time.
My point here is that, unless some serious developments in computing get underway, the human brain will be superior for a long time. Modern CPUs can't really get much more efficient than they currently are, due to physical limits on feature size (and quantum problems with electrical currents, such as electron tunneling). Quantum computers will require a large amount of work too: to prevent the quantum state from randomly shifting, they need to be kept at temperatures approaching absolute zero, and even then, because of the probabilistic nature of the calculations, they have a much higher error rate than the human brain. Overall, even if quantum computing technology is significantly improved over the next few years, it will still not be in any way comparable to the human brain, being too large, unwieldy and error-prone.
Also, on the topic of viruses, if the AI in computers got advanced enough, wouldn't errors within the machines' processing be like a disease? For example, these could either be inherent flaws in the robot's own programming or problems built up through the process of learning and changing its own structure.
The whole "robot apocalypse" thing is nothing more than science fiction. Robots will never do anything we don't design them to. We won't create a robot that is sentient because it wouldn't serve any purpose for us (it would basically just be like having a really expensive kid). Computers are tools, and we will always use make and use them for that purpose.
Raise the temperature of the Earth by 2 degrees and bang, humans start suffering and dying
Increase the voltage in a circuit by 2 volts and bang, the robot's CPU is melting
Increase the pressure slightly and we start getting crushed
Humans can withstand high pressures. At some point ordinary air becomes toxic, but most of the nitrogen can be replaced with helium. If Google doesn't lie, 30 atmospheres won't kill you.
Once robots gain cognitive learning and better collective knowledge, I suggest humans will be out of the picture. Robots, once developed properly, could be much more effective than humans.
Next, robots would never destroy themselves, as they follow a set of instructions rather than being outright rebellious.
But then robots can learn to ignore rules because, you know, following rules created by some meat bags is not the best solution to some equation.
Anyway, humans aren't that weak, and any "robot apocalypse" can be stopped by unplugging the power cord.
Our minds may be much faster and better than a supercomputer in theory; however, computers still do things faster in practice. In the real world, robots and computers are used pretty much everywhere, which shows that computers are in fact processing more of the information that matters. It does not really matter if we could theoretically be faster; right now, computers process the visible and important information for a given purpose faster.
In the next century we will speed up PCs much more, make AI more capable, and so on. I think that if a von Neumann (self-replicating) machine civilization is created, it could surpass the human race in a few hundred years.
This won't happen, for a rather simple reason: robots don't last forever. Every wire corrodes, every integrated circuit becomes more and more worn. Something always stops working, and then that's the end.
Well, Fredbill, robots are much more resistant to bullets and bombs than humans. As for giblit's point, they could learn to ignore rules; however, much like Asimov's laws, we could hard-code the rules in the hardware so we can prevent that from happening.
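A minimal software sketch of what "hard-coding" a rule might look like, assuming a hypothetical command format and a made-up rule; a real hardware interlock would sit at a much lower level, but the principle is the same: every command is routed through a check the learning system cannot rewrite.

```python
# Hypothetical sketch: a fixed safety check that every command must pass
# before reaching the actuators. Names and the rule itself are made up.
FORBIDDEN_TARGETS = frozenset({"human"})

def safety_filter(command: dict) -> dict:
    """Reject any command that violates the fixed, non-learnable rule."""
    if command.get("target") in FORBIDDEN_TARGETS:
        raise PermissionError(f"Blocked by hard-coded rule: {command}")
    return command

def execute(command: dict, actuator) -> None:
    # The learning system never calls the actuator directly;
    # everything is routed through the filter it cannot modify.
    actuator.run(safety_filter(command))
```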
In terms of organic life it would be devolution, but evolution is simply about producing something closer to perfect, and robots and computers are about as close to perfect as it gets: robots can be fixed or upgraded much more easily than humans, and they can survive in a far wider range of environments.
What happens when they rust, or if their battery runs out (if they use solar panels, what if they get stuck in a cave?)? Do they just sit there and die?
Not to mention that if you shoot them with a taser, they're finished.
For one, the Asimov laws... well, they're a joke. They would not work in any realistic context. Two, if you give a machine an infinite amount of time to learn and a set of restrictions, you have to realize that in that time it will conceive of every possible way to get around those restrictions; you might as well not have them at all, hard-coded or not. Hell, having pretty much anything hard-coded in a true artificial intelligence is silly, since realistically it should be able to program itself.
Fredbill, there are alloys which are not prone to rusting; besides, they could just use Kevlar or similar strong materials which don't rust.
Next, that would mean a taser cannot work, because they would have body armor that is non-conductive. And the battery dying doesn't matter either; it's like saying a human stuck in a desert can die.
Next, robots can be brought back, as they don't die but simply fail; once energy is supplied again, they come back with their memory intact.
Not necessarily: if you kill their battery, their RAM is wiped. They lose all memory unless it's stored on external storage, which is much slower than RAM.
RAM = short-term memory, HDD = long-term memory. You can knock down a human and possibly give them short-term memory loss, or long-term memory loss... and the chance that they remember who they are is roughly like the chance of recovering the HDD.
"much slower" is still faster than human. "Remembering a complete event" ~ loading a ~4GB video from HDD to RAM ~ seconds.
We have to sleep; robots have to recharge. We need doctors; robots need battery replacements, etc. I think that's fair enough.
Normally, local storage will hold the important info, but all data would/should also be transmitted wirelessly to a master server or P2P network. That saves all the hassle.
Besides, RAM is used for current processing and not for data storage; essential data, such as basic movement routines and critical software, could be stored in ROM. So the only thing that could be wiped is the calculation currently running.
BUT
I am pretty sure robots would save data at regular intervals, and if the battery gets too low, save everything, stop, and shut down. Simple.
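A minimal sketch of that save-and-shut-down loop in Python; the battery sensor, checkpoint path, interval, and threshold are all hypothetical placeholders for whatever a real robot would actually use.

```python
import json
import time

CHECKPOINT_PATH = "state_checkpoint.json"   # hypothetical persistent location
CHECKPOINT_INTERVAL_S = 60                  # assumed "time gap" between saves
LOW_BATTERY_THRESHOLD = 0.05                # shut down below 5% charge (assumption)

def read_battery_level() -> float:
    """Placeholder for a real battery sensor; returns charge in [0.0, 1.0]."""
    raise NotImplementedError("wire this to the robot's battery monitor")

def save_checkpoint(state: dict) -> None:
    """Write the current working state to persistent storage."""
    with open(CHECKPOINT_PATH, "w") as f:
        json.dump(state, f)

def main_loop(state: dict) -> None:
    last_save = time.monotonic()
    while True:
        # ... do the robot's normal work here, updating `state` ...
        if time.monotonic() - last_save >= CHECKPOINT_INTERVAL_S:
            save_checkpoint(state)          # periodic save during "time gaps"
            last_save = time.monotonic()
        if read_battery_level() < LOW_BATTERY_THRESHOLD:
            save_checkpoint(state)          # final save, then stop cleanly
            break                           # terminate before the battery dies
```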