How we experience the world depends on how we sense and perceive it, and how we act is governed by our perception of this world. But where does this perception come from? Leaving aside the psychological aspects, we perceive through what we sense and act on what we perceive. The senses in humans and other animals are the faculties by which outside information is received for evaluation and response. Thus the actions of humans depend on what they sense. Aristotle divided the senses into five, namely: sight, hearing, smell, taste, and touch.
These have continued to be regarded as the classical five senses, although scientists have determined the existence of as many as 15 additional senses. Sense organs buried deep in the tissues of muscles, tendons, and joints, for example, give rise to sensations of weight, position of the body, and amount of bending of the various joints; these organs are called proprioceptors. Within the semicircular canal of the ear is the organ of equilibrium, concerned with the sense of balance. General senses, which produce information concerning bodily needs (hunger, thirst, fatigue, and pain), are also recognized. But the foundation of all these is still the list of five that was given by Aristotle.
Our world is a visual world. Visual perception is by far the most important sensory process by which we gather and extract information from our environment. Vision is the ability to see the features of objects we look at, such as color, shape, size, detail, depth, and contrast. Vision is achieved when the eyes and brain work together to form pictures of the world around us. Vision begins with light rays bouncing off the surfaces of objects. Light reflected from objects in our world forms a very rich source of information and data. Reflected light has short wavelengths and a high transmission speed, which allow spatially accurate and fast localization of reflecting surfaces. The spectral variations in wavelength and intensity of the reflected light reflect the physical properties of object surfaces and provide a means to recognize them. The sources that light our world are usually inhomogeneous. The sun, our natural light source, for example, is to a good approximation a point source. Inhomogeneous light sources cause shadows and reflections that are highly correlated with the shape of objects. Thus, knowledge of the spatial position and extent of the light source enables further extraction of information about our environment.
Our world is also a world of motion. We and most other animals are moving creatures. We navigate successfully through a dynamic environment, and we use predominantly visual information to do so. A sense of motion is crucial for the perception of our own motion in relation to other moving and static objects in the environment. We must predict accurately the relative dynamics of objects in the environment in order to plan appropriate actions. Take for example the following situation, which illustrates the nature of such a perceptual task: the batsman of a cricket team is facing a bowler. In order to hit the ball to the boundary, he needs an accurate estimate of the real motion trajectory of the ball so that he can precisely plan and orchestrate his body movements to hit it. He has little more than visual information available to solve this task. And once he is in motion the situation becomes much more complicated, because visual motion information now represents the relative motion between him and the ball, while the coordinate frame that matters remains the static environment. Yet, despite its difficulty, with appropriate training some of us become astonishingly good at performing this task. High performance is important because we live in a highly competitive world. The survival of the fittest applies to us as to any other living organism, although the fields of competition might have shifted and diverged during recent evolutionary trends. This competitive pressure not only promotes a visual motion perception system that can determine quickly what is moving where, in which direction, and at what speed; it also forces this system to be efficient. Efficiency is crucial in biological systems. It encourages solutions that consume the smallest amount of resources of time, substrate, and energy. The requirement for efficiency is advantageous because it drives the system to be quicker, to go further, to last longer, and to have more resources left to solve and perform other tasks at the same time. Thus, complex sensory-motor system though the batsman is, he cannot dedicate all of his available resources to a single task.
Compared to human perceptual abilities, nature provides us with even more astonishing examples of efficient visual motion perception. Consider the various flying insects that navigate by visual perception. They weigh only fractions of a gram, yet they are able to navigate successfully at high speeds through complicated environments in which they must resolve visual motions of up to 2000 deg/s.
What applies to biological systems applies also, to a large extent, to any artificial autonomous system that behaves freely in a real-world environment. When humankind started to build artificial autonomous systems, it was commonly accepted that such systems would become part of our everyday life by the year 2001. Countless science-fiction stories and movies have encouraged visions of how such agents should behave and interact with human society. And many of these scenarios seem realistic and desirable. Briefly, we have a rather good sense of what these agents should be capable of. But their construction still eludes us. The semi-autonomous rovers of NASA's recent Mars missions and demonstrations of artificial pets are among the few examples.
Remarkably, progress in this field has been slower than in other fields of electronics. Unlike transistor technology, in which the explosion of device density follows Moore's law and computational power grows accordingly, the performance of autonomous systems is still not on par. To find the reason, we have to understand the limitations of traditional approaches. An autonomous system is one that perceives, makes decisions, and plans actions at a cognitive level; in doing so it must show some degree of intelligence. Returning to the batsman example: he knows exactly what he has to do to dispatch the ball to the boundary; he has to get into the right position and then hit the ball with precise timing. In this process, photons hit the retina and, eventually, muscle force is applied. The batsman is not aware of how much is going on inside his body. He has a nervous system, and one of its many functions is to instantiate a transformation layer between the environment and his cognitive mind. The brain reduces and preprocesses the huge amount of noisy sensory data, categorizes and extracts the relevant information, and translates it into a form that is accessible to cognitive reasoning. It is clear, then, that a whole cluster of processes takes place in a biological cognitive system within a very short time, and that transduction is an important part of this process, although it cannot by itself perform the whole complex task. Perception, therefore, is the interpretation of sensory information with respect to the perceptual goal. The process is shown in figure 1.
The brain is organized fundamentally differently from a computer, and science is still a long way from understanding how the whole thing works. A computer is, by comparison, really easy to understand. Features (or organization principles) that clearly distinguish a brain from a computer are:
The computer is still basically a serially driven machine with centralized storage and minimal self-organization. Table 1.1 lists these differences.
Table 1.1 Differences in the organization principles and operation of computer and brain
Computer | Brain
Serial | Parallel
One centralized powerful CPU and memory | 10^11 simple distributed computational and memory units
Buses shared by several components | Dedicated local point-to-point connections
Not very power efficient | Very power efficient
Digital, discrete time | Analog, continuous time
Programmed | Learning
Sensitive to errors | Robust to errors
Digital computation may eventually become fast enough to solve the present problems, and it may even become possible to build autonomous systems from digital components that are as powerful, as efficient, and as intelligent as we imagine in our wildest dreams. However, there are serious doubts about this, and so we turn to an implementation framework that can realize these properties more directly.
It was Carver Mead who, inspired by the course “The Physics of Computation” he jointly taught with John Hopfield and Richard Feynman at Caltech in 1982, first proposed the idea of embodying neural computation in silicon analog very large-scale integrated (aVLSI) circuits.
Biological neural networks are examples of wonderfully engineered and efficient computational systems. When researchers first began to develop mathematical models for how nervous systems actually compute and process information, they very soon realized that one of the main reasons for the impressive computational power and efficiency of neural networks is the collective computation that takes place among their highly connected neurons. Research has also well established that these computations are not carried out digitally, even though digital processing would be conceptually simpler. Real neurons have a cell membrane with a capacitance that acts as a low-pass filter on the signals arriving through their dendrites; they have dendritic trees that non-linearly add signals from other neurons, and so forth. Network structure and analog processing seem to be two key properties of nervous systems providing them with efficiency and computational power, but nonetheless two properties that digital computers typically do not share or exploit.
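As a simple illustration of the membrane low-pass behaviour mentioned above, the passive membrane is commonly abstracted as an RC circuit. The following equation is a standard textbook sketch rather than a description of any particular circuit in this work; the symbols are generic:

$$\tau_m \frac{dV(t)}{dt} = -\bigl(V(t) - V_{rest}\bigr) + R_m I_{in}(t), \qquad \tau_m = R_m C_m$$

Here $V$ is the membrane potential, $I_{in}$ the current arriving through the dendrites, and $R_m$ and $C_m$ the membrane resistance and capacitance. Rapid fluctuations of $I_{in}$ are attenuated, which is exactly the low-pass filtering referred to above.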
1. Biological information-processing systems operate on completely different principles from those with which most engineers are familiar. For many problems, particularly those in which the input data are ill-conditioned and the computation can be specified in a relative manner, biological solutions are many orders of magnitude more effective than those we have been able to implement using digital methods. This advantage can be attributed principally to the use of elementary physical phenomena as computational primitives, and to the representation of information by the relative values of analog signals, rather than by the absolute values of digital signals. This approach requires adaptive techniques to mitigate the effects of component differences. This kind of adaptation leads naturally to systems that learn about their environment. Large-scale adaptive analog systems are more robust to component degradation and failure than are more conventional systems, and they use far less power. For this reason, adaptive analog technology can be expected to utilize the full potential of wafer-scale silicon fabrication.
2. The architecture and realization of microelectronic components for a retina-implant system that will provide visual sensations to patients suffering from photoreceptor degeneration. Special circuitry has been developed for a fast single-chip CMOS image sensor system, which provides a high dynamic range of more than seven decades (without any electronic or mechanical shutter), corresponding to the performance of the human eye. This image sensor system is directly coupled to a digital filter and a signal processor that compute the so-called receptive-field function for generation of the stimulation data. These external components are wirelessly linked to an implanted flexible silicon multielectrode stimulator, which generates electrical signals for electrostimulation of the intact ganglion cells. All components, including additional hardware for digital signal processing and wireless data and power transmission, have been fabricated using in-house standard CMOS technology.
3. Circuits inspired by the nervous system that either help verify neurophysiological models or serve as useful components in artificial perception/action systems. Research also aims at using them in implants. These circuits are computational devices and intelligent sensors that are organized very differently from digital processors. Their storage and processing capacity is distributed. They are asynchronous and use no clock signal. They are often purely analog and operate in continuous time. They are adaptive, or can even learn at a basic level, instead of being programmed. A short introduction to the area of brain research is also included in the course. The students will learn to exploit mechanisms employed by the nervous system for compact, energy-efficient analog integrated circuits. They will get insight into a multidisciplinary research area. The students will learn to analyze analog CMOS circuits and acquire basic knowledge of brain research methods.
4. Smart vision systems will be an inevitable component of future intelligent systems. Conventional vision systems, based on the system level integration (or even chip level integration) of an image (usually a CCD) camera and a digital processor, do not have the potential for application in general purpose consumer electronic products. This is simply due to the cost, size, and complexity of these systems. Because of these factors conventional vision systems have mainly been limited to specific industrial and military applications. Vision chips, which include both the photo sensors and parallel processing elements (analog or digital), have been under research for more than a decade and illustrate promising capabilities.
5. Dr. Carver Mead, professor emeritus at the California Institute of Technology (Caltech), Pasadena, pioneered this field. He reasoned that biological evolutionary trends over millions of years have produced organisms that engineers can study to develop better artificial systems. By giving senses and sensory-based behavior to machines, these systems can possibly compete with human senses, bringing about an intersection of biology, computer science and electrical engineering. Analog circuits, electrical circuits operated with continuously varying signals, are used to implement these algorithmic processes, with transistors operated in the sub-threshold or weak inversion region (a region of operation in which transistors conduct current even though the gate voltage is slightly lower than the minimum voltage, called the threshold voltage, required for normal conduction to take place), where they exhibit exponential current-voltage characteristics and low currents (a standard form of this subthreshold current relation is quoted after this list). This circuit paradigm produces high-density and low-power implementations of some functions that are computationally intensive when implemented with other paradigms (the triode and saturation operating regions). {A transistor operates in the triode region when the gate voltage is above the threshold voltage but the drain-source voltage is lower than the difference between the gate-source voltage and the threshold voltage. In the saturation region, the gate voltage is still above the threshold voltage but the drain-source voltage is above the difference between the gate-source voltage and the threshold voltage. A MOS transistor has four terminals: drain, gate, source and bulk. Current flows between the drain and the source when enough voltage is applied to the gate to enable conduction. The bulk is the body of the transistor.} As these systems mature, replacement of human body parts is expected to become a major application area of neuromorphic electronics. The fundamental principle is that, by observing how biological systems perform these functions, robust artificial systems can be designed.
6. The proposed work presents a circuit-level model of a neuromorphic retina, a crude electronic model of biologically inspired smart visual sensors. These visual sensors have integrated image acquisition and parallel processing. With these features the neuromorphic retina mimics the neural circuitry of a bionic eye. The proposed electronic model contains adaptive photoreceptors as light sensors and other circuit components such as averaging circuits, circuits representing ganglion cells, neuronal firing circuits, etc., that function together to sense brightness, size, orientation and shape and to distinguish objects in close proximity. Although image-processing features are available in modern robots, most image-processing tasks are handled by software resources. Machine vision with the help of a neuromorphic retina, in contrast, is empowered with image processing at the front end. With added hardware resources, processing at the front end can save a great deal of engineering resources when building electronic devices with a sense of vision.
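For reference, the exponential current-voltage behaviour in weak inversion mentioned in item 5 above is commonly approximated by the standard textbook subthreshold MOSFET equation. It is quoted here only as a generic approximation; the symbols are not tied to any particular process or circuit in this work:

$$I_{DS} \approx I_0 \frac{W}{L}\, e^{\,(V_{GS}-V_{th})/(n U_T)} \left(1 - e^{-V_{DS}/U_T}\right)$$

where $I_0$ is a process-dependent leakage current, $W/L$ the transistor geometry, $V_{th}$ the threshold voltage, $n$ the subthreshold slope factor, and $U_T = kT/q \approx 26\ \mathrm{mV}$ the thermal voltage at room temperature. The exponential dependence on $V_{GS}$ is what gives subthreshold circuits their very low currents and wide dynamic range.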
This project work describes a circuit-level model of a neuromorphic retina, which is a crude electronic model of biologically inspired smart visual sensors. These visual sensors have integrated image acquisition and parallel processing. With these features the neuromorphic retina mimics the neural circuitry of a bionic eye. The proposed electronic model contains adaptive photoreceptors as light sensors together with neural firing circuits and other blocks that function together to sense brightness, size, orientation and shape to distinguish objects in close proximity. Although image-processing features are available in modern robots, most image-processing tasks are handled by software resources. Machine vision with the help of a neuromorphic retina, in contrast, is empowered with image processing at the front end. In this work it is shown that, with added hardware resources, processing at the front end can save a great deal of engineering resources as well as time when building electronic devices with a sense of vision. The objectives of the present work are:
Modelling of Neuromorphic Retina
In this chapter, the function of artificial systems and the differences between how the brain and the computer work have been described. The present work is focused on the design of neuromorphic retina layer circuits. Many studies have been carried out by researchers on the behavior and failure of the neuromorphic retina, and some investigators have performed experimental work to study its operation.
Chapter 2 covers biological neurons and the electronics of the neuromorphic retina, including descriptions of silicon neurons, electrical nodes as neurons, perceptrons, integrate-and-fire neurons, the biological significance of neuromorphic systems, neuromorphic electronics engineering methods, and the process of developing a neuromorphic chip. Chapter 3 describes the artificial silicon retina, the physiology of vision, the retina, the conversion of photons to electrons, why a neuromorphic retina is required, the equivalent electronic structure, and the visual path to the brain. Chapter 4 presents the design and implementation of the neuromorphic retina, describing the photoreceptor block, the horizontal cell block, the integrated block of photoreceptors, horizontal cells and bipolar cells, and the spike generation circuit. Chapter 5 presents the design analyses and test results of the neuromorphic retina layers. The results are summarized in the form of conclusions in Chapter 6.
Neuromorphic systems are inspired by the structure, function and plasticity of biological nervous systems. They are artificial neural systems that mimic the algorithmic behavior of biological animal systems through efficient, adaptive and intelligent control techniques. They are designed to adapt, learn from their environments, and make decisions like biological systems, not to perform better than them. There is no attempt to eliminate deficiencies inherent in biological systems. This field, called neuromorphic engineering, is opening a new era in computing, with great promise for future medicine, healthcare delivery and industry. It relies on the wealth of experience which nature offers to develop functional, reliable and effective artificial systems. Neuromorphic computational circuits, designed to mimic biological neurons, are primitives based on the optical and electronic properties of semiconductor materials.
Biological neurons have a fairly simple large-scale structure, although their operation and small-scale structure are immensely complex. Neurons have three main parts: a central cell body, called the soma, and two different types of branched, treelike structures that extend from the soma, called dendrites and axons. Information from other neurons, in the form of electrical impulses, enters the dendrites at connection points called synapses. The information flows from the dendrites to the soma, where it is processed. The output signal, a train of impulses, is then sent down the axon to the synapses of other neurons. The dendrites carry impulses towards the soma while the axon carries impulses away from the soma. Functionally, there are three different types of neurons: sensory neurons, motor neurons, and interneurons.
A neuron has a cell body (or soma) and root-like extensions called neurites. Amongst the neurites, one major outgoing trunk is the axon, and the others are dendrites. The signal-processing capability of a neuron lies in its ability to vary its intrinsic electrical potential (membrane potential) through special electro-physical and chemical processes. The portion of the axon immediately adjacent to the cell body is called the axon hillock. This is the point at which action potentials are usually generated. The branches that leave the main axon are often called collaterals. Certain types of neurons have axons or dendrites coated with a fatty insulating substance called myelin. The coating is called the myelin sheath and the fiber is said to be myelinated. In some cases, the myelin sheath is surrounded by another insulating layer, sometimes called the neurilemma. This layer, thinner than the myelin sheath and continuous over the nodes of Ranvier, is made up of thin cells called Schwann cells.
Now, how do these things work? Inside and just outside of the neuron are sodium ions (Na+) and potassium ions (K+). Normally, when the neuron is just sitting and not sending any messages, K+ accumulates inside the neuron while Na+ is kicked out to the area just outside it. Thus, there is a lot of K+ inside the neuron and a lot of Na+ just outside it. This is called the resting potential. Keeping the K+ in and the Na+ out is not easy; it requires energy from the body. An impulse coming in from the dendrites reverses this balance, causing K+ to leave the neuron and Na+ to come in. This is known as depolarization. As K+ leaves and Na+ enters the neuron, energy is released, since the neuron is no longer doing any work to keep K+ in and Na+ out. This energy creates an electrical impulse, or action potential, that is transmitted from the soma down the axon. As the impulse leaves the axon, the neuron repolarizes: it takes K+ back in, kicks Na+ out, and restores itself to the resting potential, ready to send another impulse. This process occurs extremely quickly; a neuron can theoretically send roughly 266 messages in one second. The electrical impulse may stimulate other neurons via its synaptic knobs to propagate the message.
Experiments have shown that the membrane voltage variation during the generation of an action potential generally takes the form of a spike (a short pulse, figure 2.2), and the shape of this pulse in neurons is rather stereotyped and mathematically predictable.
Neuromorphic engineers are more interested in the physiological than the anatomical model of a neuron, that is, in its functionality rather than merely the classification of its parts. And their preference lies with models that can be realized in aVLSI circuits. Luckily, many models of neurons have traditionally been formulated as electronic circuits, since many of the varying observables in biological neurons are voltages and currents. So it was relatively straightforward to implement them in VLSI electronic circuits.
There exist now many aVLSI models of neurons, which can be classified by the level of detail that is represented in them. A summary can be found in table 2.1. The most detailed ones are known as 'silicon neurons'. A bit cruder in level of detail are 'integrate and fire neurons', and even more simplified are 'perceptrons', also known as 'McCulloch-Pitts neurons'. The simplest way of representing a neuron in electronics, however, is to represent neurons as electrical nodes.
Table 2.1 VLSI models of neurons
Electrical nodes | Most simple, big networks implementable
Perceptrons | Mathematically simple but difficult to implement in aVLSI
Integrate and fire neurons | Mathematically complex but simple in aVLSI
Compartmental networks | Complex, simulation of big networks is very slow
The simplest of all neuronal models is to represent a neuron's activity simply by a voltage or a current in an electrical circuit, where input and output are identical, with no transfer function in between. If a voltage node represents a neuron, excitatory bidirectional connections can be realized simply by resistive elements between the neurons. If one wants to add the possibility of inhibitory and monodirectional connections, followers can be used instead of resistors. Or, if a current represents neuronal activity, then a simple current mirror can implement a synapse. Many useful processing networks can be implemented in this manner or in similar ways. For example, a resistive network can compute local averages of current inputs.
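To make the last point concrete, the following is a minimal sketch (illustrative only, not a circuit from this work) of how a one-dimensional resistive network of neuron nodes relaxes toward a locally averaged version of its input currents; the function name and parameter values are invented for the example:

```python
# Minimal sketch (illustrative only): neurons represented as electrical nodes.
# A 1-D resistive network relaxes toward a locally averaged version of its
# input currents, mimicking the "local average" computation mentioned above.
import numpy as np

def resistive_average(i_in, g_lateral=1.0, g_leak=0.2, steps=2000, dt=0.01):
    """Relax node voltages of a resistive line driven by input currents i_in.

    g_lateral: conductance between neighbouring nodes (coupling strength)
    g_leak:    conductance from each node to ground (sets averaging radius)
    """
    v = np.zeros_like(i_in, dtype=float)
    for _ in range(steps):
        # current flowing in from the left and right neighbours
        lateral = g_lateral * (np.roll(v, 1) + np.roll(v, -1) - 2 * v)
        # simple Euler update of the node voltage
        v += dt * (i_in - g_leak * v + lateral)
    return v

inputs = np.array([0, 0, 5.0, 0, 0, 0, 0, 0])   # one bright "pixel"
print(resistive_average(inputs).round(3))        # smooth, locally averaged profile
```

The ratio of the lateral conductance to the leak conductance sets how far the averaging spreads, which is the same trade-off the horizontal-cell network of the retina exploits.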
A perceptron is a simple mathematical model of a neuron. Like real neurons, it is an entity that is connected to others of its kind by one output and several inputs. Simple signals pass through these connections. In the case of the perceptron these signals are not action potentials but real numbers. To draw the analogy with real neurons, these numbers may represent average frequencies of action potentials. The output of a perceptron is a monotonic function (referred to as the activation function) of the weighted sum of its inputs (see figure 3.3). Perceptrons are not implemented in analog hardware very often. They were originally formulated as a mathematical rather than an electronic model, and traditional computers are good at such models, whereas it is not so straightforward to implement even simple mathematics in aVLSI. Still, there exist aVLSI implementations of perceptrons, since they promise the advantage of a truly parallel, energy- and space-efficient implementation.
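The input-output relation just described can be written compactly in the standard perceptron notation (the symbols are generic and not tied to figure 3.3):

$$y = f\!\left(\sum_{i=1}^{n} w_i x_i - \theta\right)$$

where $x_i$ are the inputs, $w_i$ the connection weights, $\theta$ a threshold (or bias) term, and $f$ the monotonic activation function.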
A simple aVLSI implementation of a perceptron is given in the schematic in figure 3.4. This particular implementation works well enough in theory; in practice, however, it is on the one hand not flexible enough (particularly the activation function), and on the other hand already difficult to tune through its bias voltages and prone to noise on the chip. Circuits that have actually been used are based on this one, but are more elaborate in order to deal with these problems.
This model of a neuron sticks closer to the original in terms of its signals. Its output and its inputs are pulse signals. In terms of frequencies it can actually be modeled by a perceptron, and vice versa. It is, however, much better suited to implementation in aVLSI. And spike communication also has distinct advantages in noise robustness, which is also thought to be a reason why the nervous system uses that kind of communication. An integrate and fire neuron integrates weighted charge inputs triggered by presynaptic action potentials. If the integrated voltage reaches a threshold, the neuron fires a short output pulse and the integrator is reset. These basic properties are depicted in figure 2.5.
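The following is a minimal behavioural sketch of the integrate-and-fire principle just described (purely illustrative; the function and parameter values are invented for the example and do not describe the circuits used later in this work):

```python
# Minimal sketch (illustrative only) of integrate-and-fire behaviour: weighted
# input charge is integrated on a "capacitor" voltage; when the voltage crosses
# a threshold, a spike is emitted and the integrator is reset.
def integrate_and_fire(input_spikes, weight=0.3, leak=0.02, threshold=1.0, dt=1.0):
    v = 0.0
    out = []
    for s in input_spikes:                  # s = 1 if a presynaptic spike arrives, else 0
        v += dt * (weight * s - leak * v)   # integrate weighted charge, small leak
        if v >= threshold:                  # threshold crossed -> fire
            out.append(1)
            v = 0.0                         # reset the integrator
        else:
            out.append(0)
    return out

presynaptic = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0]
print(integrate_and_fire(presynaptic))      # an output spike after every few input spikes
```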
The fundamental philosophy of neuromorphic engineering is to utilize the algorithmic inspiration of biological systems to engineer artificial systems. It is a kind of technology transfer from biology to engineering that involves understanding the functions and forms of biological systems and their consequent morphing into silicon chips. The fundamental biological unit mimicked in the design of neuromorphic systems is the neuron. The animal brain is composed of these individual units of computation, called neurons, which are the elementary signaling elements of the nervous system.
By examining the retina for instance, artificial neurons that mimic the retinal neurons and chemistry are fabricated on silicon (most common material), gallium arsenide (GaAs) or possibly prospective organic semiconductor materials.
Neuromorphic system design methods involve the mapping of models of perception and sensory processing in biological systems onto analog VLSI systems which emulate the biological functions while resembling their structural architecture. These systems are mainly designed with complementary metal oxide semiconductor (CMOS) transistors, which enable low power consumption, higher chip density and integration, and lower cost. These transistors are biased to operate in the sub-threshold region to enable the realization of the high dynamic range of currents which is very important for neural systems design.
Elements of adaptation and learning (a sort of higher level of adaptation in which past experience is used to effectively readjust the response of a system to previously unseen input stimuli) are incorporated into neuromorphic systems, since they are expected to emulate the behavior of biological systems and to compensate for imperfections in the physical implementation and changes in the environment where they operate. Imperfections in the physical implementation can come from mismatches between circuit elements, while noise and random errors can result from the environment.
These adaptation and learning implementations are done with analog systems that drive the realization of efficient neural systems based on parallel distributed architectures for low-power, real-time and robust operation. Fig. 4.1 shows an illustration of an adaptation by which the adaptive element is used to adjust the component p of the system to reduce the error index (Error(p)). Neuromorphic systems designed with high parallelism and connectivity permit structures with massive feedback systems.
Neuromorphic engineering can be divided into neuromorphic modeling, which reproduces neuro-physiological phenomena to increase our understanding of nervous systems, and neuromorphic computation, which uses neuronal properties to build neuron-like computing hardware. The circuits then pass through all the stages of developing an integrated circuit (or chip), which involve circuit layout, verification, fabrication in a foundry, testing and subsequent deployment. A brief explanation of each of these steps is provided as follows:
Layout Design: This stage involves the translation of the realized circuit into a silicon description through geometrical patterns, aided by CAD tools. This translation process follows process design rules that specify the spacing between wires, transistors, contacts and so on. The layout is designed to represent the electrical circuit schematic obtained from the algorithm.
Fabrication: Upon satisfactory verification of the design, the layout is sent to the foundry where it is fabricated. The process of chip fabrication is very complex. It involves many stages of oxidation, etching, photolithography, etc. Typically, the fabrication process translates the layout into silicon or any other semiconductor material that is used.
Testing: The final stage of the chip development is called testing. Electronic equipment like oscilloscopes, probes, and electrical meters are used to measure some parameters of the chip to verify its functionalities based on the chip specifications.
A vision chip that faithfully mimics the neural circuitry of a real retina could lead to a better bionic eye for those with vision loss. The vision chip uses the mammalian retina as its blueprint. The chip contains light sensors and circuit components that function together to sense brightness, size, orientation and shape in order to distinguish objects.
Such a vision chip can serve as a neuroprosthetic that is implanted directly into the eye and connected to the nerves that carry signals to the visual cortex of the brain. This silicon retina, which has the potential to help humans with damaged vision, is also helpful for robots in realizing the concept of machine vision. Although image processing is available in modern robots, most of it is handled by software resources. Machine vision with the help of vision chips can process image information at the front end. With added hardware resources, processing at the front end can save a great deal of engineering resources when building intelligent devices.
The very first step towards the creation of artificial vision was taken by Dr. Mark Humayun in 1988. He demonstrated that a blind person could be made to see light by stimulating the nerve ganglia behind the retina with an electrical current. This test proved that the nerves behind the retina still functioned even when the retina had degenerated. Based on this information, scientists set out to create a device that could translate images into electrical pulses that could restore vision. Such a device, the Artificial Silicon Retina (ASR), is very close to becoming available to the millions of people who have lost their vision to retinal disease. This device is a ray of hope for people suffering from vision loss, with the following features.
The Artificial Silicon Retina has hundreds of micro-photodiodes designed to convert the light energy of the visual frame into thousands of tiny electrical impulses that stimulate the remaining functional cells of the retina in patients with age-related macular degeneration and retinitis pigmentosa. The artificial retina is surgically implanted under the retina, in a location known as the subretinal space. The silicon retina is designed to produce visual signals similar to those produced by the photoreceptors in a biological retina [4]. The photoelectric signals generated in the ASR are able to induce biological visual signals in the remaining retinal cells, which are then processed and sent via the ganglion cells and the optic nerve to the brain.
The eye is the visual window of the brain. It is an optical instrument marvel and an amazing bioelectrochemical computer. Light enters the eye and focuses on the retina, and an amazing process then begins. The optical structure of the eye is similar to a fully automatic camera that has a lens focusing on a photographic film. The camera's lens, diaphragm, and film directly correspond to the eye's lens, iris, and retina.
The retina consists of a dense matrix of photoreceptors, of which there are two distinct types, according to their shape: rods and cones. From electron microphotographs, we can see that the rods are tubular and larger than the cones. Rod cells form black-and-white images in dim light, and cones mediate color vision. Rods are activated by very few photons and thus mediate vision in dim light; cones sense color, are richer in spatial and temporal detail, and need many photons to be activated. The human retina has three kinds of cones. Each contains a pigment that absorbs strongly in the short (blue), middle (green), or long (red) wavelengths of the visible spectrum. This difference in color absorption of the three cone pigments provides the basis for color vision. Color television capitalizes on this fact, and the sensation of many colors is created by synthesis of these three fundamental colors [5]. Conversion from light energy to electrochemical energy in the photoreceptors is a highly complex process. The complex structure of the retina consists of cells arranged in layers of differently specialized neurons with numerous interconnections between them. The eye's rods and cones convert photonic signals into electrochemical signals.
The retina, the photosensitive part of the eye (what the film would be in a photo camera), is not uniform. We only perceive a clear, color, camera-like picture in the very center of our visual field. With distance from the center, the resolution and the color perception decrease and the light and motion sensitivity increase.
The image is further processed before it is sent on to the thalamus and the brain. There are horizontal interactions between the nerve cells on top of the photoreceptors that help adapt the cells to the average light level and enhance edges. For example, looking out of a window we can see features outside and inside at the same time, although the illumination levels differ by some orders of magnitude; a camera, by contrast, can only adjust its shutter either to the inside light level (with the window appearing as one brilliantly white spot) or to the outside light level (with the inside as a uniform dark frame). As for edge enhancement: if you look at the walls of a white room where they meet in a corner, one wall will appear brighter and the other darker because of the angle of the main source of illumination; very close to the corner, you will perceive the brighter wall becoming brighter still and the darker wall darker. Thus the contrast between them is enhanced in your eye, although physically the brightness is not really changing towards that corner.
A conceptual model that shows some of the nerve cells that lie on top of the actual photo receptors could look something like figure 3.2. The photo receptors (rods (peripheral vision, not color- but very illumination-sensitive: better for night vision) and cones (color sensitive, concentrated in the center of the visual field)) are excited by light that is focused on them through the eye lens. They adapt or tire when stimulated and the signal they are sending out is thus attenuated when presented with a constant stimulus.
The photoreceptors in turn excite so-called horizontal cells, which collect input from a group of photoreceptors and from their colleagues. Thus their own activity reflects the collective average light level of a certain neighbourhood. The difference between that collective light level and the local light level is computed in the bipolar cells.
Proteins are key ingredients for the response of rods and cones. In the absence of light, there is a high concentration of cyclic guanosine monophosphate (cGMP), a chemical transmitter that binds to the pores of the surface membrane and keeps them open, allowing sodium to enter. To maintain the ionic equilibrium, the membrane continually pumps the sodium ions out. Rods contain the reddish protein rhodopsin (rhodopsin is what turns the retina, or salt ponds, purple) in disks that absorb photons singly and contribute to the initial response of the chain of events that underlies vision. Rhodopsin has two components, 11-cis-retinal and opsin. 11-cis-retinal, an organic molecule derived from vitamin A, is isomerized when light falls on it (i.e., it changes shape but retains the same number of atoms). Opsin is a protein that can act as an enzyme in the presence of the isomerized 11-cis-retinal. When light falls on a rod, it is absorbed by its rhodopsin in a disk, and the 11-cis-retinal is isomerized. The isomerized retinal triggers the enzymatic activity of the opsin. The active opsin then catalytically activates many molecules of the protein transducin. The activated transducin molecules in turn activate the enzyme phosphodiesterase, which cleaves cGMP by inserting a water molecule into it.
This process is known as hydrolysis. Each enzyme molecule can cleave several thousand cGMPs, which then are no longer capable of keeping the membrane pores open. Thus, many pores close and the concentration of cGMP drops, reducing the permeability of the membrane and thus the influx of sodium. This causes the negative polarization of the cell interior to increase (the cell is hyperpolarized) and the generated action potential to travel down to the axonic endings. Thus, this chemical reaction behaves like a chemical photomultiplier. Subsequent to this, a restoration process begins: the cGMP is restored and attached to the membrane pores, which reopen, and the transducin and rhodopsin are deactivated so that the cycle may repeat. Each rod contains about 100 million rhodopsin molecules. One photon is capable of activating one rhodopsin molecule, which eventually triggers an action potential. Obviously, the more photons absorbed, the stronger the action potential.
A group of diseases that affect the retina includes retinitis pigmentosa (RP) and age-related macular degeneration (AMD). These diseases are characterized by a gradual breakdown and degeneration of the photoreceptor cells. Depending on which type of cell is mainly affected, the symptoms vary, and include night blindness, loss of peripheral vision (tunnel vision) and loss of the ability to discriminate colour. Symptoms of RP are most often first noticed in adolescents and young adults, with progression of the disease usually continuing throughout the individual's life. The rate of progression and degree of visual loss are variable. So far, there is no known cure for RP. However, intensive research is currently under way to discover its cause, prevention and treatment. At this time, RP researchers have identified a first step in managing RP: certain doses of vitamin A have been found to slightly slow the progression of the disease in some individuals. Researchers have also found some of the genes that cause RP. There are other inherited retinal degenerative diseases that share some of the clinical symptoms of RP. Some of these conditions are complicated by other symptoms besides the loss of vision. The most common of these is Usher syndrome, which causes both hearing and vision loss. Other rare syndromes that researchers are studying include Bardet-Biedl syndrome, Best disease, Leber congenital amaurosis and Stargardt disease.
In order to treat retinal diseases, the possible approaches are the subretinal implant and the epiretinal implant. These two methods differ because they substitute different physiological functions. An epiretinal implant stimulates the ganglion cells directly. The device generates spike trains at defined sites of the retina. The epiretinal device does not rely on the natural data processing of the neural compartments in the retina. Hence, the epiretinal approach requires an encoder for mapping visual patterns onto pulse trains as inputs for electronic stimulation. A subretinal implant is meant to replace the degenerated photoreceptors with photodiodes and electrodes. Hence, the technical implant must provide an analog signal to the adjacent neural layers. In this case the neural retina must be partly intact, and it must be able to map the visual pattern into pulse trains. The signals are processed and converged in the functional neural layers of the retina before they are transmitted through the optic nerve to the visual cortex.
The schematic diagram shows the functions of the cells of the retina as blocks. The photoreceptors, the rods and cones, are light detectors that work as transducers; when activated by light of a given intensity or color, they generate an action potential. This signal is then provided to the horizontal cells, which are named for their horizontal connections and which take the average of all the stimuli. One input from the photoreceptors also goes to the bipolar cells. The bipolar cells work as difference calculators, subtracting the smaller input from the larger one so that the result is always positive.
Thus the difference between the individual photoreceptor output and the averaged output of the horizontal cells is provided as the input to the ganglion cells, which are nothing but spike-generating units. These spike trains then flow to the occipital lobe of the brain through the optic nerve, where the brain interprets the information. Although there are 120 million rods and 7 million cones, there are only 1 million ganglion cells and optic-nerve fibers.
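The photoreceptor, horizontal-cell and bipolar-cell chain described above can be summarized with a small behavioural sketch (purely illustrative; the function name, the neighbourhood size and the pixel values are invented for the example and are not taken from the circuits designed later):

```python
# Minimal sketch (illustrative only) of the retinal signal chain described above:
# photoreceptor outputs -> horizontal-cell local average -> bipolar-cell difference.
import numpy as np

def retina_blocks(photoreceptors, surround=1):
    p = np.asarray(photoreceptors, dtype=float)
    horizontal = np.empty_like(p)
    for i in range(len(p)):
        lo, hi = max(0, i - surround), min(len(p), i + surround + 1)
        horizontal[i] = p[lo:hi].mean()      # horizontal cells: neighbourhood average
    bipolar = np.abs(p - horizontal)          # bipolar cells: sign-free difference
    return horizontal, bipolar

pixels = [0.2, 0.2, 0.9, 0.9, 0.2, 0.2]       # a bright patch on a dark background
h, b = retina_blocks(pixels)
print(b.round(2))                              # nonzero responses only near the patch edges
```

The bipolar output is largest where the local pixel value differs from its neighbourhood average, i.e. around the edges of the bright patch, which is the centre-surround behaviour referred to above.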
In the central fovea region there is a one-to-one connection between cones, bipolars, and ganglions, whereas in other regions of the retina many rods and cones synapse with one bipolar. The one-to-one connection explains the superb resolution of the fine features of a scene. Horizontal neurons connect rods and cones with bipolars. They are inhibitory in function and provide feedback from one receptor to another, adjusting their response so that the retina can deal with the dynamic range of light intensities that far exceed the dynamic range of individual neurons. Remarkably, vision responds to both sunlight and starlight, a range of 10 billion.
The following sections are dedicated to the understanding of each block and the way in which their function can be morphed effectively using the concept of neuromorphic engineering.
Most of the optic-nerve fibers derived from the ganglion cells terminate in the lateral geniculate nucleus (LGN) in the brain, as shown in figure 5.5. The LGN neurons project their axons directly to the primary visual cortex via the optic radiations. From there, after several synapses, the messages are sent to destinations in adjacent cortical areas and other targets deep in the brain. One target area even projects back to the LGN, establishing a feedback path.
Each side of the brain has its own LGN and visual cortex. The optic nerve from the left eye and that from the right eye cross in front of the LGNs at the optic chiasm. At the chiasm, part of the left optic nerve is directed to the right side of the brain and the other part to the left side of the brain, and similarly for the right optic nerve. As a result of the optic chiasm, the LGN and the visual cortex on the left side are connected to the two left-half retinas of both eyes and are therefore concerned with the right half of the visual scene; the converse is true for the right LGN and visual cortex.
The pathways terminate in the brain's limbic system, which contains the hippocampus and the amygdala, both of which have important roles in memory; in fact, they appear to be the crossroads of memories. Memories from the present and the past and from various sensory inputs meet and associate there, leading to the development of emotions and, perhaps, invention. The amygdala and the hippocampus seem to be coequal concerning memory, especially recognition.
The LGN contains two types of opponent neurons, nonopponent and spectrally opponent. Nonopponent neurons process light intensity information. Spectrally opponent neurons process color information. The visual cortex registers a systematic map of the visual field so that each small region of the field activates a distinct cluster of neurons that is organized to respond to specific stimuli. There are four types of neurons here. Simple neurons respond to bars of light, dark bars or straight-line edges in certain orientations and in a particular part of the visual field. Complex neurons respond like simple neurons but independently of the position of lines in the visual field. They also respond to select directions of movement. Hypercomplex neurons respond to the stimuli of lines of certain length and orientation. Super-hypercomplex neurons respond to edges of certain width that move across the visual field, and some respond to corners. The visual cortex is organized into columns where neurons in a given column have similar receptive fields. Thus, one column might respond to vertical lines, another to motion to the right, and so on.
The pathway extends into the inferior temporal cortex. Distinct cortical stations are connected in various sequences along the pathway. Neurons here respond to more complex shapes. Each neuron receives data from large segments of the visual world and responds to progressively more complex physical properties such as size, shape, color, and texture until, in the final station, the neurons synthesize a complete representation of the object. Thus, one concludes with a high degree of confidence that the visual system seems to have a pyramidal hierarchical structure whereby an elemental set of features is extracted first, and from this more complex patterns are extracted, and so on. They are then combined with color information, movement, and their direction and their relative spatial relationship and are finally stored and associated with other features.
The design of the neuromorphic retina is carried out on the basis of the theory stated in the previous chapters. The electronic photoreceptors are circuits that are sensitive to the incident light intensity. This sensitivity to changes in intensity is achieved by feeding a filtered version of the output back to the input. The feedback loop contains a hysteretic element. Neuromorphic imager circuits make use of the fact that CMOS photosensors can be co-located on the same substrate as the processing circuitry.
Thus, they are perfectly suited to mimic the local computations of the retina. Photoactive structures in semiconductors make use of the fact that photons cause impact ionizations in the material. Thus, an electron is set free from an atom in the conductor and an electron-hole pair is created: a negative and a positive charge carrier. Normally those two charges recombine immediately, unless there is an electric field within the material separating them for good, in which case there will be a current of these charges flowing along that electric field.
Gain-providing elements are also required, because the output of the photodetector is so small in magnitude that it cannot be used directly for further processing. The photocurrent must be amplified in such a way that the amplification factor is high when the input light level is low (that is, when the natural output of the photodetector is small), and low (or at least not as high) when the input light intensity is high (that is, when the nominal output of the photodetector is already large). This is called the adaptive property of the photodetector circuitry. The current amplification can be carried out in the following three ways:
The block diagram that we will refer to throughout this design process is the one shown in chapter 3. Thus we design the blocks in such a way that they can be integrated afterwards, or we design the integrated blocks themselves.
The photoreceptor block is designed using a simple BJT in place of the photodetector, as photodetectors are not available in Multisim 8.0. So we make use of the emitter current that flows from the transistor. This emitter current is amplified with the help of a JFET source-follower circuit. The feedback amplifier is used to obtain a controlled gain. The typical outputs of the circuit shown here are 9.363 mA at the emitter of the BJT after source-follower amplification and 25.269 mA at the output of the op-amp. The outputs are shown in figure 4.2.
The horizontal cells are the ones that calculate the average of the pixel outputs so as to obtain the luminance value of the environment. The horizontal cells are the points through which all the pixels, or photoreceptor cells, remain interconnected with each other. We can use a resistive mesh network for averaging the currents. In this circuit each node is fed with the amplified value from each pixel; the circuit then takes the average of all the values, and after some time each node settles to the same value, which is the average of the current values injected by the pixels. Here we have used a 5-pixel structure, while there are around 1.2 million pixels in the real retina of the eye. The silicon retina [8] is a system built from our analog functional building blocks. It illustrates many of the properties of neural systems. This is shown in figure 7.3.
The model for the retina of each type of animal is different, but we have conserved the gross structure of the vertebrate retina in our design of the silicon retina. The circuit generates, in real time, outputs that correspond to signals observed in biological retinas, and exhibits a tolerance for device imperfection. The cells in the first layers of the retina are shown in Fig. 4. Light is transduced into an electrical signal via the photoreceptors at the top. The primary pathway proceeds vertically from the photoreceptors through the triad synapse to the bipolar cells and then to the ganglion cells. This pathway intersects two horizontal pathways: the horizontal cells of the outer plexiform layer and the amacrine cells of the inner plexiform layer. The triad synapse is the point of contact among the photoreceptor, the bipolar cell, and the horizontal network. In just a few layers of cells in the retina, a remarkable amount of computation is done. The image becomes independent of the absolute light level as the retina adapts to a wide range of viewing conditions, and edges are enhanced and time derivatives emphasized. A schematic drawing of the silicon retina is shown in Fig. 6. The horizontal network is modeled as a resistive network.
Further, we have developed the model of the photoreceptor, the bipolar cell, and the horizontal cell as shown in Fig. 4.4. A wide-range amplifier provides a conductance through which the resistive network is driven towards the photoreceptor output potential. The horizontal cells form a network that averages the photoreceptor output spatially and temporally.
A second amplifier senses the voltage difference across the conductance and generates an output proportional to the difference between the photoreceptor output and the network potential at that location. The bipolar cell's output is thus proportional to the difference between the photoreceptor signal and the horizontal cell signal. Each photoreceptor in the network is linked to its neighbours with resistive elements. By using a wide-range amplifier in place of a bidirectional conductance, we make the photoreceptor an effective voltage source that provides input into the resistive network. The model consists of an array of pixels and a scanning arrangement for reading the results of the retinal processing. The output of any pixel can be accessed through the multimeters.
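The behaviour of such a follower-driven resistive network can be summarized by a generic continuous-time node equation (quoted here only as a standard description of this kind of network; the symbols are not tied to the component values used in this design):

$$C\,\frac{dV_i}{dt} = G\bigl(V_{ph,i} - V_i\bigr) + \sum_{j \in \mathcal{N}(i)} \frac{V_j - V_i}{R}, \qquad V_{bip,i} \propto V_{ph,i} - V_i$$

where $V_{ph,i}$ is the photoreceptor output at pixel $i$, $V_i$ the horizontal-network potential at that node, $G$ the conductance provided by the wide-range amplifier, $R$ the lateral resistance to the neighbouring nodes $\mathcal{N}(i)$, and $V_{bip,i}$ the bipolar-cell output.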
In this circuit each node is fed with the amplified value from each pixel; the circuit then takes the average of all the values, and after some time each node settles to the same value, which is the average of the current values injected by the pixels. Here we have used a 16-pixel structure, while there are around 1.2 million pixels in the real retina of the eye. The resistive mesh can also be implemented as an all-CMOS structure. In this structure, as shown in figure 4.6, the n-MOS and p-MOS transistors are used as resistors.
After the modeling of the photoreceptors (PR), horizontal cells (HC) and bipolar cells (BC), the outputs of the multimeters were taken both in voltage and in current, giving the results shown in Fig. 4.4. The current outputs are of particular interest, as they form the input to the next stage. The current outputs at each pixel of this circuit are shown in Fig. 4.4. The output at a bright pixel is around 18 mA to 19 mA and that at a dark pixel is around -4 mA to -7 mA.
(Figure: output at the third pixel in the resistive mesh, representing brightness, and at the fifth pixel in the resistive mesh, representing darkness.)
The outputs show a constant variation in voltage with respect to time for the test condition. The first and third pixel structures gave a constant 4 V output with respect to time (Fig. 4.5 (A)), and the second, fourth and fifth pixel structures gave zero voltage with respect to time (Fig. 4.5 (B)). The first and third pixels thus represent the bright regions of the image, and the remaining second, fourth and fifth pixels represent the dark regions.
The photoreceptor, the horizontal cells, and the bipolar cells in the triad synapse interact in a center-surround organization. In this organization, the output of any pixel in the resistive network can be accessed through the oscilloscope.
The scanners can be operated in one of two modes: static probe or serial access.
I. In static probe mode, a single row and column are selected, and the output of a single pixel is observed as a function of time.
II. In serial access mode, both vertical and horizontal shift registers are clocked at regular intervals to provide a sequential output of the processed image for display on an oscilloscope.
Fig. 4.4 shows the average operating-point voltage of the bipolar cell output of both the biological and the silicon retinas as a function of surround illumination. At a fixed surround illumination level, the output of the bipolar cell saturates to produce a constant voltage output at very low or very high center intensities, and it is sensitive to changes in input over the middle of its range. Using the potential of the resistive network as a reference, it centers the range over which the output responds on the signal level averaged over the five pixels.
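One common way to express this saturating, surround-referenced behaviour is a hyperbolic-tangent characteristic (given here only as a standard transconductance-amplifier style approximation, not as the measured characteristic of the circuits in this work):

$$V_{bip} \approx V_{sat}\tanh\bigl(\alpha\,(V_{ph} - V_{net})\bigr)$$

where $V_{net}$ is the resistive-network (surround) potential, $V_{ph}$ the local photoreceptor output, $\alpha$ the amplifier gain, and $V_{sat}$ the saturation level. The output is roughly linear for small differences and clips to a constant value for large ones, which is the saturation described above.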
Further, we have developed the model of the photoreceptor, the bipolar cell, and the horizontal cell using transistors, as shown in Fig. 4.7. A wide-range amplifier provides a conductance through which the transistor network is driven towards the photoreceptor output potential. The horizontal cells form a network that averages the photoreceptor output spatially and temporally.
The outputs shown in Fig. 4.8 remain constant in voltage with respect to time for the test condition. The first pixel structure is plotted in red and sits at 1.4 V, while the second, third, fourth and fifth pixel structures are plotted in blue, green, pink and sky blue respectively.
The integrated block of the photodetector, the amplification unit and the bipolar cell is shown in Fig. 7.4. The bipolar cells are essentially differencing units: they simply compute the difference between the individual pixel output and the average output. In the real retina this stage is arranged so that its outputs are always positive, that is, the lower value is always subtracted from the higher one. Our work, however, does not need to take this into account, because all the functionality remains intact whether or not the outputs are negative.
In the figure, the output of the U2 op-amp is taken to the averaging block; after some time lapse the average output appears at the same node, and this average is then used in the subtraction performed by the bipolar cell.
In this figure each pixel has the structure shown in Fig. 4.7. The output of each pixel is taken to the averaging block, the average output is returned to the pixel, and the difference output appears on the line out0 of each pixel. We have used a voltage of 3 V at a pixel input to represent a bright pixel and 0 V to represent a dark pixel. Multimeters are attached at random positions to test the output of the bipolar cell integrated inside each pixel; these outputs are shown in Fig. 4.8.
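The differencing performed by the bipolar-cell block can be checked with a few lines of MATLAB. The 3 V / 0 V pixel pattern follows the simulation above, while reporting the result as a plain voltage difference (rather than the current actually measured by the multimeters) is a simplification.

% Five-pixel example: each bipolar output is its pixel value minus the shared average.
pix = [3 0 3 0 0];               % pixel inputs: bright (3 V) and dark (0 V)
avg = mean(pix);                 % output of the averaging block, common to all pixels
out = pix - avg;                 % bipolar (differencing) outputs
disp(out)                        % bright pixels come out positive, dark pixels negative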
It can be clearly seen from the outputs that at a dark pixel the output is -17.35 mA, while at a bright pixel it ranges from 14.98 mA to 15.43 mA. These outputs agree with those obtained from the individual blocks. Only the current outputs are taken here, as these alone are of interest for the next stage, the integrate-and-fire neuron.
As already discussed in Chapter 3, neurons communicate among themselves and with other parts of the body through the action potentials they generate when excited. This excitation comes from an externally applied stimulus or from other neurons, which may either excite or inhibit them. The action potential has the shape of a spike, a voltage pulse of short duration. Likewise, the communication between the retina and the brain takes place through spike trains generated by the ganglion cells, which travel to the brain via the optic nerve. Many spike trains flow through the optic nerve at the same time without interfering with one another. An appropriate model for spike generation is the integrate-and-fire neuron with frequency adaptation; the circuit for this neuron is shown in Fig. 4.9.
Integrate-and-fire (I&F) circuits typically integrate small currents onto a capacitor until a threshold is reached. When the voltage on the capacitor exceeds the threshold, a fast digital pulse is generated to signal the occurrence of a spike, or event, and the capacitor is reset. These circuits are generally integrated in large arrays on neuromorphic devices that implement networks of spiking neurons, or that use spiking elements to transmit sensory signals to other neuromorphic processing elements.
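A minimal MATLAB sketch of this integrate-and-fire behaviour is given below. The effective capacitance is an assumed value chosen so that an input of about 16 mA fires at roughly the 1835 Hz reported later for the bright level; it is not the capacitance of the circuit in Fig. 4.9.

% Minimal integrate-and-fire model: integrate the input current on a capacitor,
% emit a spike when the threshold is crossed, then reset the capacitor.
C    = 8.7e-6;                    % assumed effective integration capacitance (F)
Vth  = 1.0;                       % assumed spiking threshold (V)
Iinj = 16e-3;                     % injected current, roughly the bright-pixel output (A)
dt   = 1e-6;                      % simulation time step (s)
nsteps = 50000;                   % 50 ms of simulated time
Vmem = 0;  spikes = [];
for t = 1:nsteps
    Vmem = Vmem + (Iinj/C)*dt;    % linear integration of the input current
    if Vmem >= Vth
        spikes(end+1) = t*dt;     % record the spike time
        Vmem = 0;                 % reset the capacitor
    end
end
fprintf('Mean firing rate = %.0f Hz\n', numel(spikes)/(nsteps*dt));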
The asynchronous communication protocol used to interface neuromorphic devices containing spiking elements is based on the Address-Event Representation (AER). In this representation, input and output signals (address-events) are sent from/to VLSI devices using stereotyped, non-clocked, pulse-frequency modulated signals that encode the address of the sending node. Analog information is carried in the temporal structure of the inter-pulse intervals.
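The address-event idea itself is easy to mimic in MATLAB: each spike is reduced to the address of the sending node plus its time of occurrence, and events from all nodes share one time-ordered stream. The spike times below are invented for illustration.

% Two nodes emitting spikes; the shared bus carries only (time, address) pairs.
t1 = [0.2 0.6 1.0 1.4];                                % spike times of node 1 (ms, made up)
t2 = [0.5 1.3];                                        % spike times of node 2 (ms, made up)
events = [t1(:) ones(numel(t1),1); t2(:) 2*ones(numel(t2),1)];
events = sortrows(events, 1);                          % order events in time on the single bus
disp(events)                                           % columns: time (ms), address of sender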
The ‘I&F' neuron circuit is shown in Fig. 4.9. The circuit comprises a source follower Q1-Q2, used to control the spiking threshold voltage; an inverter with positive feedback Q3-Q7, for reducing the circuit's power consumption; an inverter with controllable slew-rate Q8-Q11, for setting arbitrary refractory periods; a digital inverter Q13-Q14, for generating digital pulses; a current-mirror integrator Q15-Q19, for spike frequency adaptation, and a minimum size transistor Q20 for setting a leak current.
The input current Iinj is integrated linearly by Cmem onto Vmem. The source follower Q1-Q2 produces Vin = K(Vmem - Vsf), where Vsf is a constant sub-threshold bias voltage and K is the sub-threshold slope coefficient. As Vmem increases and Vin approaches the threshold voltage of the first inverter, the feedback current Ifb starts to flow, increasing Vmem and Vin more rapidly. The positive feedback has the effect of making the inverter Q3-Q5 switch very rapidly, dramatically reducing its power dissipation. A spike is emitted when Vmem is sufficiently high to make the first inverter switch, driving Vspk and Vo2 to Vdd. During the spike emission period (for as long as Vspk is high), a current with amplitude set by Vadap is sourced into the gate-to-source parasitic capacitance of Q19 on node Vca. Thus, the voltage Vca increases with every spike, and slowly leaks to zero through leakage currents when there is no spiking activity. As Vca increases, a negative adaptation current Iadap, exponentially proportional to Vca, is subtracted from the input and the spiking frequency of the neuron is reduced over time. Simultaneously, during the spike emission period, Vo2 is high, the reset transistor Q12 is fully open, and Cmem is discharged, bringing Vmem rapidly to Gnd. As Vmem (and Vin) return to ground, Vo1 goes back to Vdd, turning Q10 fully on. The voltage Vo2 is then discharged through the path Q10-Q11, at a rate set by Vrfr (and by the parasitic capacitance on node Vo2). As long as Vo2 is sufficiently high, Vmem is clamped to ground. During this "refractory" period, the neuron cannot spike, as all the input current Iinj is absorbed by Q12.
The adaptation mechanism implemented by the circuit is inspired by models of its neurophysiological counterpart: the voltage Vca, functionally equivalent to the calcium concentration [Ca2+] in a real neuron, is increased with every spike and decays exponentially to its resting value; if the dynamics of Vca is slow compared to the inter-spike intervals then the effective adaptation current is directly proportional to the spiking rate computed in some temporal window. The results for the above circuit are shown in the following figure.
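The following MATLAB sketch adds such an adaptation variable to the simple integrate-and-fire model above: a calcium-like quantity is bumped at every spike and decays slowly, and the current it produces is subtracted from the input, so the inter-spike interval lengthens over time. All adaptation constants are illustrative assumptions, not parameters extracted from the circuit of Fig. 4.9.

% Integrate-and-fire with spike-frequency adaptation via a calcium-like variable Vca.
C = 8.7e-6;  Vth = 1.0;  Iinj = 16e-3;  dt = 1e-6;   % same assumed values as before
dVca = 0.05;  tauCa = 20e-3;  gAdap = 9e-3;          % assumed adaptation parameters
Vmem = 0;  Vca = 0;  spikes = [];
for t = 1:100000                                     % 100 ms of simulated time
    Iadap = gAdap * Vca;                             % adaptation current grows with Vca
    Vmem  = Vmem + ((Iinj - Iadap)/C)*dt;            % effective input is reduced by Iadap
    Vca   = Vca - (Vca/tauCa)*dt;                    % slow decay of the calcium-like variable
    if Vmem >= Vth
        spikes(end+1) = t*dt;  Vmem = 0;  Vca = Vca + dVca;   % spike, reset, bump Vca
    end
end
plot(spikes(2:end), diff(spikes))                    % inter-spike interval grows over time
xlabel('time (s)'), ylabel('inter-spike interval (s)')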
The spike train generated at 15.943 mA has a spike frequency of 1835 Hz. The spikes are generated with a higher initial amplitude, and the amplitude decreases as time passes; the spike frequency also decreases with time, and eventually the spikes disappear. The spike-generation output at -17.3 mA is shown in Fig. 7.10: clearly, no spike is generated at this current value. This holds for any value below 6 mA for this circuit, which is defined by the threshold value for which the circuit was designed.
The results of running the MATLAB code for spike generation are shown in Appendix A.
From Architecture Build Status:OK
From Neuron Build Status:OK
From Synapse Build Status:OK
From Adaptation Build Status:OK
From Simulation Build Status:OK
From Input Output Build Status:OK
From BNN Build Status:OK
In tabular form, the result is:
TABLE 4.1: Spike generation result on MATLAB
(Current time)
0.58002   1.6786    2.7772    3.8759    4.9745    5.5775    6.0731    7.1717    7.8099    8.2702    9.3688    9.4363    10.4675   10.8042   11.566    12.0202   12.6644   13.1383   13.7629
14.1726   14.8616   15.1334   15.9602   16.0569   16.9318   17.0586   17.7679   18.1571   18.5708   19.2557   19.3483   20.1215   20.3541   20.8539   21.4527   21.5766   22.2958   22.5512
22.9827   23.6498   23.6646   24.3366   24.7484   24.986    25.6472   25.8469   26.2764   26.9155   26.9455   27.5355   28.0441   28.1491   28.7601   29.1428   29.3563   29.9587   30
In this chapter we analyze the designs built and explained in the previous chapter against the theoretical data, theorems and postulates available to us, and thus verify that the designs we have worked on are functionally and theoretically consistent.
Photoreceptor (PR), horizontal cell (HC) and bipolar cell (BC) block: the outputs of the multimeters are taken both as voltages and as currents, and they give the following results.
For the test condition of a black-and-white pixel assumption (which is apt for rods), the outputs vary regularly: around 3 to 3.7 V at a pixel where illumination is assumed and around -0.5 V where the absence of light is assumed. The average is also correctly computed by the resistive mesh and appropriately subtracted from each pixel output, giving a perception of overall scene brightness. The current outputs are of particular interest because they form the input to the next stage. The current outputs at each pixel of this circuit are:
The output at a bright pixel is around 18 mA to 19 mA and that for a dark pixel is around 6 mA to 7 mA. Although all these outputs are at the higher end of the voltage and current ranges compared with the real situation in the eye, we have chosen them for ease of monitoring. (The outputs given here are for the case where 5 V is used for a bright pixel and 0 V for a dark pixel, and the resistive mesh averages just 5 pixels.)
The output shows a regular variation of current and voltage with respect to time. For the test condition of the black-and-white pixel assumption, the output is about 1.22 V to 11.10 V at a pixel where illumination is assumed and around -11.10 V where light is absent. Although these outputs are at the higher end of the voltage and current ranges compared with the real situation in the eye, we have chosen them for ease of monitoring; note that the outputs given here are for the case where 12 V is used for a bright pixel and zero or negative voltage for a dark pixel, and the transistor mesh averages just 5 pixels. The current outputs are of particular interest because they form the input to the next stage. The current output of each pixel of the transistor mesh remains constant with respect to time: around 16.539 mA to 25.273 mA for bright pixels and -25.277 mA for dark pixels.
The DC transfer characteristic at different nodes of importance is shown here:
$1 - output of a pixel with bright input (2V)
$21 - the input node of bright pixel
$29 - the output node of same bright pixel
$22 - the resistive mesh node
$4 - the output node of dark pixel
$20 - the input node of dark pixel
The spike-generation circuit is implemented with an inherent frequency-adaptation mechanism. This ensures that the frequency changes for different values of the current input to the circuit. Also, for a given current input, the frequency changes with time until the spikes die out. We have plotted a graph between the input current value and the initial frequency of the generated spikes, which is shown in Fig. 5.6.
(Fig. A: spike frequency 2000 Hz; Fig. B: spike frequency 2500 Hz)
Figures A and B show the shape of the spike train generated at different current values: 17 mA in Fig. A and 35 mA in Fig. B. These results show the operation of the circuit at different values of the input current; it is observed that not only the frequency of the generated spike train but also the number of spikes generated varies with the input current. For the two levels of illumination used in our project, no spike is generated at the dark level, and spikes are generated at a frequency of 1835 Hz for the bright level.
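A rough MATLAB tabulation of the current-to-initial-frequency relation can be obtained from the ideal integrate-and-fire model used above, with the 6 mA cut-off reported earlier added as a hard threshold. This linear estimate ignores the refractory period and the adaptation of the real circuit, which make it sub-linear at higher currents, so the values are indicative only.

% Sweep the input current and estimate the initial spike frequency f = I/(C*Vth),
% with no spikes below the assumed 6 mA cut-off.
C_eff = 8.7e-6;  Vth = 1.0;  Imin = 6e-3;            % assumed effective capacitance and cut-off
Ivals = (0:5:40)*1e-3;                               % swept input currents (A)
freq  = zeros(size(Ivals));
for k = 1:numel(Ivals)
    if Ivals(k) >= Imin
        freq(k) = Ivals(k)/(C_eff*Vth);              % ideal linear I&F estimate
    end
end
plot(Ivals*1e3, freq)
xlabel('input current (mA)'), ylabel('initial spike frequency (Hz)')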
The shape of the spike is also important for neuronal communication: the spike train must consist of regularly shaped spikes, and the typical theoretical shape of a spike is explained in Chapter 2. The shape of the spike generated by our spike-generation model is shown in Fig. 5.6(B). The circuit takes around 19 ms to respond to the inputs, and the duration of a spike is 0.07 ms. The time taken for spike generation is consistent with the minimum perception time of the eye. Thus, in this chapter the circuits have been analyzed under the required conditions and the results validated against the theoretical ones.
The neuromorphic retina has been designed and simulated successfully in Multisim 8.0, and the code for spike generation has been written and run successfully in MATLAB. The goal of the project, to design and simulate a neuromorphic retina, has therefore been fully achieved. The outputs we obtain are equal or very close to the theoretical outputs. The integrate-and-fire neuron, that is, the spike-generation circuit we have implemented, is a highly advanced circuit, and we have also incorporated frequency adaptation into it.
If the circuitry, with some improved features, can be implemented in hardware and the hardware made biocompatible, it will prove highly beneficial for ageing people and people suffering from retinitis pigmentosa: they will be able to see again, if not fully, then at least partially.
We have done our best and left no stone unturned in this project, designing and simulating the circuits that were available and implementable within the constraints of the simulator and the tools at hand. There is nevertheless still immense scope to be harnessed, and the circuits can certainly be improved upon.
One limitation of the circuits implemented in this project is that the outer-retina part, that is, the photoreceptor, horizontal cell and bipolar cell portion of the circuit, is implemented using op-amps, which internally contain a great deal of additional circuitry for optimization. Although the outputs obtained with op-amps are good and accurate, their size is a disadvantage to be considered, since the circuits are intended to be implanted or used as an external device with the biological system. This kind of usage is shown in Figures 6.1 and 6.2.
The neuromorphic retina circuit may be improved by a layered approach, in which the photoreceptors occupy the maximum surface area so that they can gather light most effectively; the second layer may contain the horizontal and bipolar cell structures, and the third layer the ganglion cells. A second possible approach is the use of advanced CMOS technology, with CMOS amplifiers and differencing circuitry in place of op-amps, to decrease the surface area of the neuromorphic retina. Another possible improvement concerns the implementation of the resistive mesh: although we have implemented it in CMOS, the configuration should ideally be of a triangular type to obtain better averaging properties. Multisim 8.0 does not support this kind of component alignment, so if the alignment can be achieved in a later version, or in some other way within the same software, it will certainly improve the averaging block of the circuit.
Lastly, only the design and simulation have been carried out in the present project, so the work can be taken further to hardware implementation, allowing the real timing and size of each block to be tested. Also, if part of the system is implanted and part is kept external to the body, a telemetry system will have to be designed to enable communication between the two parts.