Artificial General Intelligence Will Be More Than Intelligence

Artificial General Intelligence is a term used to describe the kind of artificial intelligence we hope will be human-like in intellect. We cannot even come up with a perfect definition of human intelligence, yet we are already on our way to building artificial versions of it. The question is whether the artificial intelligence we build will be useful to us, or whether we will end up working for it.

If we are to understand the concerns, we first have to understand intelligence and then assess where we are in the process. Intelligence could be broadly described as the process of creating new information out of available information. That is the basic idea: if you can produce new information from existing information, then you are intelligent.

Since this is more a scientific matter than a spiritual one, let's speak in terms of science. I will try to avoid heavy scientific jargon so that an ordinary person can follow the content easily. There is a well-known concept in building artificial intelligence: the Turing test. A Turing test evaluates an artificial intelligence to see whether we can recognize it as a computer or cannot tell it apart from a human. The criterion is this: if you communicate with an artificial intelligence and, somewhere along the way, you forget that it is actually a computer system and not a person, then the system passes the test. That is, the machine is genuinely artificially intelligent. We have several systems today that can pass this test for a short while. They are not perfectly artificially intelligent, because at some point along the way we remember that we are talking to a computer system.
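The imitation-game setup described above can be sketched in code. This is a toy illustration, not a real evaluation harness: the judge, the "human", and the "machine" are stand-in functions I am inventing for the example, and a real test would involve free-form conversation.

```python
import random

def run_turing_test(judge, human_reply, machine_reply, questions):
    """Minimal imitation-game sketch: the judge questions two hidden
    parties and must guess which one is the machine."""
    # Randomly assign the machine to slot "A" or "B" so the judge
    # cannot rely on position.
    machine_slot = random.choice(["A", "B"])
    transcript = []
    for q in questions:
        reply_a = machine_reply(q) if machine_slot == "A" else human_reply(q)
        reply_b = machine_reply(q) if machine_slot == "B" else human_reply(q)
        transcript.append((q, reply_a, reply_b))
    guess = judge(transcript)          # judge returns "A" or "B"
    return guess == machine_slot       # True -> machine was detected

# Toy stand-ins (assumptions for illustration, not real chatbots):
human = lambda q: "Hmm, let me think about that."
machine = lambda q: "Hmm, let me think about that."   # indistinguishable
judge = lambda transcript: random.choice(["A", "B"])  # forced to guess

# When the replies are indistinguishable, the judge can only guess,
# so detection hovers around 50% over many trials.
detections = sum(run_turing_test(judge, human, machine, ["Hi"])
                 for _ in range(1000))
```

The point of the sketch is the pass criterion: a machine "passes" to the extent the judge's detection rate falls toward chance.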

An example of such an artificial intelligence would be Jarvis from the Iron Man and Avengers movies. It is a system that understands human communication, interprets human behavior and even gets frustrated at times. That is what the computing community, or the programming community, calls an Artificial General Intelligence.

To put it in everyday terms, you could talk to that system the way you do with a person, and the system would interact with you like a person. The problem is that people have limited knowledge and memory. Sometimes we cannot remember a name. We know that we know the name of the other person, but we just cannot recall it in time. We will remember it somehow, but later, on some other occasion. This is not quite what the programming world calls parallel processing, but it is something like it. Our brain's function is not fully understood, but our neurons' functions are mostly understood. That is the equivalent of saying that we don't understand computers but we do understand transistors, because transistors are the building blocks of all computer memory and function.

When a human parallel-processes information, we call it memory. While talking about one thing, we remember something else. We say "by the way, I forgot to tell you" and then carry on with a different subject. Now consider computer systems. They never forget anything at all. This is the most important part: as their processing capacity grows, their information processing gets better. We are not like that. The human brain seems to have a limited capacity for processing, on average.

The rest of the brain is information storage. Some people have traded off one of those abilities for the other. You may have met people who are very bad at remembering things but very good at doing math in their head. These people have effectively allocated parts of the brain ordinarily used for memory to processing instead. This lets them process better, but they lose part of the memory.

The brain has an average size, and therefore a limited number of neurons. There are around 100 billion neurons in an average human brain. That is, at minimum, 100 billion connections. I will get to the maximum number of connections later in this article. So if we wanted to build roughly 100 billion connections out of transistors, we would need something like 33 billion transistors, because each transistor has three terminals and can contribute to three connections.
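The back-of-envelope arithmetic can be written out explicitly. The neuron count here is the commonly cited rough figure of 100 billion, and the "three connections per transistor" assumption follows the reasoning in the paragraph above (one per terminal):

```python
# Back-of-envelope: how many transistors to match the brain's
# minimum connection count, at three connections per transistor?
neurons = 100_000_000_000          # ~10^11 neurons, commonly cited figure
min_connections = neurons          # at minimum, one connection per neuron
terminals_per_transistor = 3       # e.g. source/gate/drain

transistors_needed = min_connections // terminals_per_transistor
print(f"{transistors_needed:,}")   # about 33 billion
```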

Coming back to the point: we reached that level of computing around 2012, when IBM simulated on the order of 530 billion neurons and 100 trillion synapses. You have to understand that a computer synapse is not a biological synapse. We cannot equate one transistor with one neuron, because neurons are much more complicated than transistors. To represent one neuron we need many transistors. In fact, IBM built a neurosynaptic chip with 1 million neurons representing 256 million synapses. To do this, it used 5.4 billion transistors across 4096 neurosynaptic cores (research.ibm.com/cognitive-computing/neurosynaptic-chips.shtml).
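Taking IBM's published TrueNorth figures (5.4 billion transistors, 1 million neurons, 256 million synapses), a quick calculation shows just how many transistors it takes to stand in for a single neuron:

```python
# IBM TrueNorth published figures
transistors = 5_400_000_000   # 5.4 billion transistors on the chip
neurons = 1_000_000           # 1 million hardware neurons
synapses = 256_000_000        # 256 million synapses

transistors_per_neuron = transistors // neurons
synapses_per_neuron = synapses // neurons
print(transistors_per_neuron)  # 5400 transistors per simulated neuron
print(synapses_per_neuron)     # 256 synapses per neuron
```

Thousands of transistors per neuron, and still only a few hundred synapses each, versus the thousands of synapses a biological neuron can form.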

Now you can appreciate how complicated an actual human neuron must be. The problem is that we have not yet built an artificial neuron at the hardware level. We have built transistors and then layered software on top to manage them. Neither a transistor nor an artificial neuron can manage itself, but an actual neuron can. So the computing capacity of a biological brain starts at the neuron level, while artificial intelligence starts at much higher levels, after at least several thousand basic units or transistors.

The advantage for artificial intelligence is that it is not confined within a skull, with the size limitation that imposes. If you figured out how to connect 100 trillion neurosynaptic cores and had big enough facilities, you could build a supercomputer out of them. You can't do that with your brain; the brain is bound to its number of neurons. According to Moore's law, computers will at some point exceed the limited number of connections that a human brain has. That is supposed to be the critical point in time when the intelligence singularity is reached and computers become essentially more intelligent than humans. This is the general thinking on it. I think it is wrong, and I will explain why.
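The Moore's-law projection can be sketched numerically. Both the starting point (a 2014-era chip at TrueNorth scale) and the target (the ~100 trillion synapses mentioned earlier) are illustrative assumptions, and real transistor scaling has already slowed, so treat the resulting year as a toy extrapolation, not a prediction:

```python
# Toy Moore's-law extrapolation: transistor count doubling every
# two years until it reaches the brain's synapse count.
transistors = 5_400_000_000        # assumed start: 2014-era chip
target = 100_000_000_000_000       # ~100 trillion synapse-like connections
year = 2014

while transistors < target:
    transistors *= 2               # one doubling
    year += 2                      # every two years, per Moore's law

print(year)                        # 2044 under these assumptions
```

Fifteen doublings closes the roughly 18,500x gap, landing in the 2040s, which is in the ballpark of popular singularity estimates; the point of the article, though, is that raw connection counts are not the whole story.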

Judging by the growth of the number of transistors in a computer processor, computers by 2015 should have been able to process at the level of the brain of a mouse; a real biological mouse. We have hit that point and are moving past it. This is about ordinary computers, not supercomputers. Supercomputers are actually combinations of processors connected in such a manner that they can process information in parallel.
