Note: While taking my initial dive into Nick Bostrom’s philosophical paper “Are You Living in a Computer Simulation?”, it occurred to me that a direct answer to his questions may already have been reached since the paper was first published in 2003.
In a recent interview on this very subject, Bostrom was asked which of the three possibilities detailed in his abstract he favors. He responded that he is more inclined toward one of them than the other two, but did not reveal which.
Therefore, we continue our personal search for answers as we navigate the information provided in his studies. Do we, or do we not, live a simulated life in a computer?
Part 3: THE TECHNOLOGICAL LIMITS OF COMPUTATION
At our current stage of technological development, we have neither sufficiently powerful hardware nor the requisite software to create conscious minds in computers. But persuasive arguments have been given to the effect that if technological progress continues unabated then these shortcomings will eventually be overcome. Some authors argue that this stage may be only a couple of decades away. (1) Yet present purposes require no assumptions about the time-scale. The simulation argument works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints.
Such a mature stage of technological development will make it possible to convert planets and other astronomical resources into enormously powerful computers. It is currently hard to be confident in any upper bound on the computing power that may be available to posthuman civilizations. Since we still lack a “theory of everything”, we cannot rule out the possibility that novel physical phenomena, not allowed for in current physical theories, may be utilized to transcend the constraints (2) that in our current understanding impose theoretical limits on the information processing attainable in a given lump of matter. We can with much greater confidence establish lower bounds on posthuman computation, by assuming only mechanisms that are already understood. For example, Eric Drexler has outlined a design for a system the size of a sugar cube (excluding cooling and power supply) that would perform 10^21 operations per second. (3) Another author gives a rough estimate of 10^42 operations per second for a computer with a mass on the order of a large planet. (4) (If we could create quantum computers, or learn to build computers out of nuclear matter or plasma, we could push closer to the theoretical limits. Seth Lloyd calculates an upper bound for a 1 kg computer of 5 × 10^50 logical operations per second carried out on approximately 10^31 bits. (5) However, it suffices for our purposes to use the more conservative estimate that presupposes only currently known design principles.)
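The gulf between these estimates can be made concrete with a quick calculation. The following sketch uses only the order-of-magnitude figures quoted above; it claims nothing beyond them:

```python
import math

# Order-of-magnitude figures quoted in the text (operations per second).
SUGAR_CUBE_OPS = 1e21    # Drexler's sugar-cube-sized design
PLANET_OPS = 1e42        # computer with the mass of a large planet
LLOYD_1KG_OPS = 5e50     # Lloyd's theoretical bound for a 1 kg computer

# The planetary computer outperforms the sugar-cube design by 21 orders
# of magnitude; Lloyd's theoretical bound adds almost 9 more on top.
print(round(math.log10(PLANET_OPS / SUGAR_CUBE_OPS), 1))   # 21.0
print(round(math.log10(LLOYD_1KG_OPS / PLANET_OPS), 1))    # 8.7
```

Even the conservative planetary estimate, in other words, dwarfs the already-enormous sugar-cube design by a factor of a billion trillion.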
The amount of computing power needed to emulate a human mind can likewise be roughly estimated. One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico (contrast enhancement in the retina), yields a figure of approximately 10^14 operations per second for the entire human brain. An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of approximately 10^16 – 10^17 operations per second. (7) Conceivably, even more could be required if we want to simulate in detail the internal workings of synapses and dendritic trees. However, it is likely that the human nervous system has a high degree of redundancy on the microscale to compensate for the unreliability and noisiness of its neuronal components. One should therefore expect a substantial efficiency gain when using more reliable and versatile non-biological processors.
Memory seems to be no more stringent a constraint than processing power. (8) Moreover, since the maximum human sensory bandwidth is approximately 10^8 bits per second, simulating all sensory events incurs a negligible cost compared to simulating the cortical activity. We can therefore use the processing power required to simulate the central nervous system as an estimate of the total computational cost of simulating a human mind.
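Why the sensory stream is negligible follows directly from the two figures just given. A sketch, treating each sensory bit as costing one operation (a deliberately generous assumption):

```python
# Order-of-magnitude figures from the text.
BRAIN_OPS_LOW = 1e14   # low estimate: ops/sec to emulate one human brain
SENSORY_BITS = 1e8     # maximum human sensory bandwidth, bits/sec

# Even against the *low* brain estimate, and even charging a full
# operation per sensory bit, the sensory stream is a one-in-a-million
# overhead on top of simulating the brain itself.
overhead = SENSORY_BITS / BRAIN_OPS_LOW
print(overhead)  # 1e-06
```

Against the higher 10^17 estimate the overhead shrinks to one part in a billion, which is why the brain-simulation cost alone serves as the total.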
If the environment is included in the simulation, this will require additional computing power; how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed: only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities. The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations: verisimilitude need extend only to the narrow band of properties that we can observe from our planet or from spacecraft within the solar system. On the surface of Earth, macroscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world. Exceptions arise when we deliberately design systems to harness unobserved microscopic phenomena that operate in accordance with known principles to get results that we are able to independently verify. The paradigmatic case of this is a computer. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements. This presents no problem, since our current computing power is negligible by posthuman standards.
Moreover, a posthuman simulator would have enough computing power to keep track of the detailed belief-states in all human brains at all times. Therefore, when it saw that a human was about to make an observation of the microscopic world, it could fill in sufficient detail in the simulation in the appropriate domain on an as-needed basis. Should any error occur, the director could easily edit the states of any brains that have become aware of an anomaly before it spoils the simulation. Alternatively, the director could skip back a few seconds and rerun the simulation in a way that avoids the problem.
It thus seems plausible that the main computational cost in creating simulations that are indistinguishable from physical reality for human minds in the simulation resides in simulating organic brains down to the neuronal or sub-neuronal level. (9) While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use approximately 10^33 – 10^36 operations as a rough estimate. It suffices that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses. This attenuated version of substrate-independence is quite widely accepted.
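One way to see where a figure of this size comes from is to multiply the per-brain cost by the total number of human-seconds in history. The input figures below (roughly 100 billion humans ever lived, about 50 years each, about 30 million seconds per year) are assumptions for illustration, not stated in this text; the product lands within an order of magnitude of the quoted 10^33 – 10^36 range, which is rounded to that precision anyway:

```python
import math

# Assumed inputs for illustration (not from the text above).
HUMANS_EVER = 1e11         # ~100 billion humans across all of history
YEARS_PER_LIFE = 50        # rough average lifespan
SECONDS_PER_YEAR = 3e7     # ~30 million seconds in a year

# Per-brain estimates quoted earlier in the text (ops/sec).
BRAIN_OPS_LOW, BRAIN_OPS_HIGH = 1e14, 1e17

human_seconds = HUMANS_EVER * YEARS_PER_LIFE * SECONDS_PER_YEAR
low = human_seconds * BRAIN_OPS_LOW    # ~1.5e34 operations
high = human_seconds * BRAIN_OPS_HIGH  # ~1.5e37 operations
print(f"~10^{math.log10(low):.0f} to 10^{math.log10(high):.0f} operations")
```

Set against the planetary computer's 10^42 operations per second from earlier, even the high end of this range would take such a machine well under a second.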
Neurotransmitters, nerve growth factors, and other chemicals that are smaller than a synapse clearly play a role in human cognition and learning. The substrate-independence thesis is not that the effects of these chemicals are small or irrelevant, but rather that they affect subjective experience only via their direct or indirect influence on computational activities. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the requisite detail of simulation is at the synaptic level (or higher).
End of Part 3. A note on notation: exponents are written here with a caret, so 10^5 means 10 × 10 × 10 × 10 × 10 = 100,000.