---THE SOURCE OF MEDICAL IMAGING PROCEDURES AND TECHNOLOGY created by Sumarsono.Rad.Tech,S.Si----

COMPUTERS FUNDAMENTALS AND APPLICATIONS IN MEDICAL IMAGING

By: Sumarsono


The progressive and evolutionary growth of medicine would not be possible without the aid of computers. As a result of the application of the computer to the storage, analysis, and manipulation of data, pathologic conditions can be diagnosed more accurately and earlier in the disease process, resulting in an increased patient cure rate. The increasing use of computers in medical science clearly demonstrates the need for qualified personnel who can understand and operate computerized equipment.



History of Computers
The history of the computer can be traced back to man's efforts to count large numbers. This process of counting large numbers generated various systems of numeration, such as the Babylonian, Greek, Roman, and Indian systems of numeration. Of these, the Indian system has been accepted universally; it is the basis of the modern decimal system of numeration (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). Later you will see how the computer solves calculations based on the decimal system. But you may be surprised to learn that the computer does not understand the decimal system and instead uses the binary system of numeration for processing.
1. Calculating Machines
It took many generations for early man to build mechanical devices for counting large numbers. The first calculating device, called the ABACUS, was developed by the Egyptian and Chinese people. The word ABACUS means calculating board. It consisted of sticks in horizontal positions on which sets of pebbles were inserted. A modern form of the ABACUS is given in Fig. 1.2. It has a number of horizontal bars, each having ten beads; the bars represent units, tens, hundreds, and so on.
2. Napier’s bones
The Scottish mathematician John Napier built a mechanical device for the purpose of multiplication in 1617 A.D. The device became known as Napier's bones.
3. Slide Rule
The English mathematician Edmund Gunter developed the slide rule. This device could perform operations such as addition, subtraction, multiplication, and division. It was widely used in Europe in the 17th century.
4. Pascal's Adding and Subtracting Machine
You might have heard the name of Blaise Pascal. At the age of 19 he developed a machine that could add and subtract. The machine consisted of wheels, gears, and cylinders.
5. Leibniz’s Multiplication and Dividing Machine
The German philosopher and mathematician Gottfried Leibniz built around 1673 a mechanical device that could both multiply and divide.
6. Babbage’s Analytical Engine
It was in the year 1823 that the famous Englishman Charles Babbage built a mechanical machine to do complex mathematical calculations, called the difference engine. Later he developed a general-purpose calculating machine called the analytical engine. Charles Babbage is called the father of the computer.
7. Mechanical and Electrical Calculator
At the beginning of the 19th century, the mechanical calculator was developed to perform all sorts of mathematical calculations, and it was widely used up to the 1960s. Later, the rotating parts of the mechanical calculator were driven by an electric motor, so it was called the electrical calculator.

8. Modern Electronic Calculator
The electronic calculators used in the 1960s ran on electron tubes and were quite bulky. The tubes were later replaced with transistors, and as a result the size of calculators became very small. The modern electronic calculator can perform all kinds of mathematical computations and functions. It can also be used to store some data permanently. Some calculators have built-in programs to perform complicated calculations.
Computers that used vacuum tubes as their electronic elements were in use throughout the 1950s. Vacuum tube electronics were largely replaced in the 1960s by transistor-based electronics, which are smaller, faster, cheaper to produce, require less power, and are more reliable. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers. By the 1980s, computers became sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as washing machines. The 1980s also witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.
Functional Components of a Computer
The term hardware is used to describe the functional equipment components of a computer and is everything concerning the computer that is visible. Software designates the parts of the computer system that are invisible, such as the machine language and the programs. A computer program may run from just a few instructions to many millions of instructions. A typical modern computer can execute billions of instructions per second (gigahertz or GHz) and rarely makes a mistake over many years of operation. Large computer programs comprising several million instructions may take teams of programmers years to write, so it is highly unlikely that the entire program has been written without error.
The computer hardware consists of four functionally independent components: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.
Input is the process of entering data and programs into the computer system. A computer is an electronic machine that, like any other machine, takes raw data as input and performs processing to give out processed data. The input unit therefore carries data from the user to the computer in an organized manner for processing.

The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer. Control systems in advanced computers may change the order of some instructions so as to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from. The control system's function is as follows (note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU):
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).
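The cycle above can be sketched as a toy interpreter. The two-field instruction format and the opcodes LOAD/ADD/STORE/HALT are invented for this illustration, not those of any real CPU:

```python
# Toy fetch-decode-execute loop illustrating the control unit's cycle.
# The instruction format and opcodes here are invented for this sketch.

def run(program, memory):
    pc = 0   # program counter: which instruction to read next
    acc = 0  # a single register standing in for the ALU's accumulator
    while True:
        opcode, addr = program[pc]   # step 1: fetch the instruction
        pc += 1                      # step 3: increment the program counter
        if opcode == "LOAD":         # steps 4-5: read data, feed a register
            acc = memory[addr]
        elif opcode == "ADD":        # step 6: ask the ALU to operate
            acc += memory[addr]
        elif opcode == "STORE":      # step 7: write the result back
            memory[addr] = acc
        elif opcode == "HALT":
            return memory
        # step 8: loop back and fetch the next instruction

# Compute memory[2] = memory[0] + memory[1]
mem = {0: 2, 1: 3, 2: 0}
run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)], mem)
print(mem[2])  # 5
```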

The process of saving data and instructions is known as storage. Data has to be fed into the system before the actual processing starts, and because the processing speed of the central processing unit (CPU) is so fast, the data has to be provided to the CPU at a comparable speed. The data is therefore first stored in the storage unit for faster access and processing. This storage unit, or primary storage, of the computer system is designed for this purpose: it provides space for storing data and instructions.
Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speeds are not required. In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally, computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or might include multiplying or dividing, trigonometry functions (sine, cosine, etc.), and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than, or less than the other.
Output: This is the process of producing results from the data to obtain useful information. The output produced by the computer after processing must be held somewhere inside the computer before being presented in human-readable form. The output is also stored inside the computer for further processing.
Binary System
In the computer’s memory both programs and data are stored in binary form. The binary system has only two values, 0 and 1. As human beings we all understand the decimal system, but the computer can only understand the binary system. This is because the large number of integrated circuits inside the computer can be considered as switches, each of which can be either ON or OFF. If a switch is ON it is considered 1, and if it is OFF it is 0. A number of switches in different states will give a message like: 110101....10. So the computer takes input in the form of 0s and 1s and gives output in the same form. Every number in the binary system can be converted to the decimal system and vice versa; for example, binary 1010 means decimal 10. Therefore it is the computer that takes information or data in decimal form from the user, converts it into binary form, processes it to produce output in binary form, and converts that output back into decimal form.
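The decimal-to-binary conversion described above can be sketched in a few lines of Python, using plain arithmetic rather than the built-in `bin`/`int` helpers:

```python
def to_binary(n):
    """Convert a non-negative decimal integer to its binary digit string."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # remainder by 2 gives the next lower bit
        n //= 2
    return bits

def to_decimal(bits):
    """Convert a binary digit string back to a decimal integer."""
    value = 0
    for b in bits:
        value = value * 2 + int(b)  # shift left one place, add the next bit
    return value

print(to_binary(10))       # 1010  (the example from the text)
print(to_decimal("1010"))  # 10
```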
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers; either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
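The byte ranges quoted above (0 to 255 unsigned, -128 to +127 in two's complement) can be checked directly; Python's standard `struct` module packs integers into raw bytes:

```python
import struct

# One unsigned byte holds 0..255; one signed (two's complement) byte
# holds -128..127.
print(struct.pack("B", 255))   # b'\xff'  largest unsigned byte
print(struct.pack("b", -128))  # b'\x80'  most negative signed byte

# In two's complement, -1 and 255 share the same bit pattern 11111111.
assert struct.pack("b", -1) == struct.pack("B", 255)

# Larger numbers use several consecutive bytes: 4 bytes hold a 32-bit integer.
print(len(struct.pack("i", 262144)))  # 4
```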
Applications in Medical Imaging
Analog to Digital Converters

Devices that produce images of real objects, such as patients, produce an analog rather than a digital signal. The electrical signal emitted from the output phosphor of an image intensifier on a fluoroscopic unit, the scintillation crystal of a nuclear medicine detector or of a computed tomography (CT) unit, or the piezoelectric crystal of an ultrasound machine is in analog form, with a variance in voltage. For these signals to be read as data by the computer, they must be digitized and converted into the binary number system. The peripheral device that performs this task is the analog-to-digital converter (ADC), which transforms the sine wave into discrete increments with a binary number assigned to each increment. The assignment of the binary numbers depends on the output voltage, which in turn represents the degree of attenuation of the various tissue densities within the patient. The basic component of an ADC is the “comparator,” which outputs a “1” when the voltage equals or exceeds a precise analog voltage and a “0” if the voltage does not equal or exceed this predetermined level. The most significant parameters of an ADC are (1) digitization depth (the number of bits in the resultant binary number); (2) dynamic range (the range of voltage or input signals that result in a digital output); and (3) digitization rate (the rate at which digitization takes place). Achieving the optimal digitization depth is necessary for resolution quality and flexibility in image manipulation. Digitization depth and dynamic range are analogous to contrast and latitude in a radiograph.
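The comparator behaviour described above amounts to a uniform quantizer, which can be sketched as follows; the 3-bit depth and the 0–1 V dynamic range are arbitrary illustration values, not those of any particular scanner:

```python
def digitize(voltage, depth_bits=3, v_min=0.0, v_max=1.0):
    """Quantize an analog voltage into a binary number.

    depth_bits  - digitization depth (bits in the output number)
    v_min/v_max - dynamic range (input voltages that produce output)
    All parameter values are illustrative assumptions.
    """
    levels = 2 ** depth_bits             # 3 bits -> 8 discrete increments
    # Signals outside the dynamic range saturate at the ends of the scale.
    voltage = min(max(voltage, v_min), v_max)
    step = (v_max - v_min) / levels
    # Each comparator outputs 1 once the voltage reaches its threshold;
    # counting the 1s is equivalent to integer division by the step size.
    level = min(int((voltage - v_min) / step), levels - 1)
    return format(level, "03b")

print(digitize(0.0))   # 000  - bottom of the dynamic range
print(digitize(0.49))  # 011  - mid-range voltage
print(digitize(1.0))   # 111  - top of the dynamic range
```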

To produce a video image, the field of the image is divided into many cubes forming a matrix, with each cube assigned a binary number proportional to the degree of attenuation of the x-ray beam or the intensity of the incoming signal. The individual three-dimensional cubes, with length, width, and depth, are called voxels (volume elements), with the degree of attenuation or the intensity of the incoming voltage determining their composition and thickness.

Because the technology for displaying three-dimensional objects has not been fully developed, a two-dimensional square, or pixel (picture element), represents the voxel on the television display monitor or cathode ray tube. The matrix is an array of pixels arranged in two dimensions, length and width, or rows and columns. The more pixels an image contains, the larger the matrix becomes and the better the resolution quality of the image. For instance, a matrix containing 256 x 256 pixels has a total of 65,536 pixels, or pieces of data, whereas a matrix of 512 x 512 pixels contains 262,144 pieces of data. One should not confuse field size with matrix size. A larger matrix also allows more manipulation of the data or of the image displayed on the television monitor, which is very beneficial and useful in imaging modalities such as digital subtraction fluoroscopy, CT, nuclear medicine, and ultrasound.
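The pixel counts quoted above follow directly from the matrix dimensions, and the same arithmetic gives the raw storage an image needs; the 2-bytes-per-pixel figure below is an illustrative assumption (a 12- or 16-bit digitization depth occupies two bytes per pixel):

```python
# Pixels in a matrix = rows x columns; storage = pixels x bytes per pixel.
for side in (256, 512):
    pixels = side * side
    print(f"{side} x {side} matrix: {pixels} pixels, "
          f"{pixels * 2} bytes at 2 bytes/pixel")
```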

DIGITAL IMAGING PROCESSING

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subfield of digital signal processing, digital image processing has many advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data, and it can avoid problems such as the build-up of noise and signal distortion during processing.

Within the computer system, digital images are represented as groups of numbers. These numbers can therefore be changed by applying mathematical operations, and as a result the image is altered. This important concept has provided extraordinary control in contrast enhancement, image enhancement, subtraction techniques, and magnification without losing the original image data.
Contrast Enhancement
Window width encompasses the range of densities within an image. A narrow window is comparable to the use of a high-contrast radiographic film; as a result, image contrast is increased. Increasing the width of the window allows more of the gray scale to be visualized, or more latitude in the densities of the image. A narrow window is valuable when subtle differences in subject density need to be better visualized. However, the use of a narrow window increases image noise, and densities outside the narrow window are not visualized.
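Windowing can be sketched as a linear mapping of pixel values onto a display gray scale; the window level and width values below are arbitrary illustration values:

```python
def window(pixel, level=50, width=100):
    """Map a pixel value onto a 0-255 display gray scale.

    level - center of the window; width - range of densities displayed.
    Pixels below the window render black and above it white, which is
    why densities outside a narrow window are not visualized; a narrow
    width spreads fewer input densities across the full gray scale,
    raising displayed contrast.
    """
    low = level - width / 2
    if pixel <= low:
        return 0
    if pixel >= low + width:
        return 255
    return round((pixel - low) / width * 255)

print(window(50))                # mid-window value -> mid-gray
print(window(-10), window(120))  # outside the window -> 0 and 255
```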

Image Enhancement or Reconstruction
Image enhancement or reconstruction is accomplished by the use of digital filtering, which can be defined as accentuating or attenuating selected frequencies in the image. The filtration methods used in medical imaging are classified as (1) convolution; (2) low-pass filtering, or smoothing; (3) band-pass filtering; and (4) high-pass filtering, or edge enhancement. The background intensities within a medical image consist mainly of low spatial frequencies, whereas an edge, typifying a sudden change in intensities, is composed mainly of high spatial frequencies, for example, the bone-to-air interface of the skull in computed tomography. Spatial noise originating from within the computer system usually consists of high spatial frequencies, and filtration reduces the amount of high spatial frequencies inherent in the object. The percentage of transmission versus spatial frequency, when plotted on a graph, is called the modulation transfer function (MTF). One goal of continuing research in medical imaging is to develop systems with a higher modulation transfer function.

Convolution is accomplished automatically in computer systems equipped with fast Fourier transforms. The convolution process is implemented by placing a filter mask, an array or matrix, over the image array in memory. A filter mask is a square array, usually consisting of 3 x 3 elements. The size of the filter mask is determined by the manufacturer of the equipment; larger masks are not often used because they take longer to process. The convolution filtering process can be conceptualized as placing the filter mask over an area of the image matrix, multiplying each element by the image value directly beneath it, obtaining the sum of these values, and then placing that sum in the output image in the exact location it occupied in the original image.
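The mask-multiply-sum procedure described above can be sketched directly; the 3 x 3 smoothing mask below is an illustrative averaging kernel, not any vendor's filter:

```python
def convolve(image, mask):
    """Slide a 3x3 filter mask over an image (a list of rows).

    At each position, multiply every mask element by the image value
    directly beneath it, sum the products, and write the sum into the
    output image at the same location. Border pixels are left unchanged
    here for simplicity.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    total += mask[dy + 1][dx + 1] * image[y + dy][x + dx]
            out[y][x] = total
    return out

# A low-pass (smoothing) mask: every element 1/9, i.e. a local average.
smooth = [[1 / 9] * 3 for _ in range(3)]
noisy = [[0, 0, 0],
         [0, 9, 0],   # a single bright pixel: a high spatial frequency
         [0, 0, 0]]
print(convolve(noisy, smooth)[1][1])  # about 1.0 - the spike is averaged down
```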

Subtraction Technique
The advantages of using digital subtraction include the ability to visualize small anatomic structures and to perform the examination via venous injection of contrast media. The most common digital subtraction techniques are temporal subtraction and dual-energy subtraction. Hybrid subtraction is a combination of these two methods.

Magnification
Magnification, sometimes called zooming, is the process of selecting an area of interest and copying each pixel within that area an integer number of times. Large magnifications may give the image the appearance of being constructed of blocks. To provide a more diagnostic image, a smoothing or low-pass filter operation can be performed to smooth out the distinct intensities between the blocks.
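The pixel-replication zoom described above can be sketched in a few lines; the 2x factor is an arbitrary example:

```python
def zoom(image, factor=2):
    """Magnify by copying each pixel `factor` times in each dimension.

    Large factors make the result look blocky, which is why a smoothing
    (low-pass) pass is often applied afterwards.
    """
    out = []
    for row in image:
        wide = [p for p in row for _ in range(factor)]  # repeat across
        out.extend([wide[:] for _ in range(factor)])    # repeat down
    return out

region = [[1, 2],
          [3, 4]]
for row in zoom(region):
    print(row)
# [1, 1, 2, 2]
# [1, 1, 2, 2]
# [3, 3, 4, 4]
# [3, 3, 4, 4]
```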

Three-Dimensional Image
When three-dimensional imaging was first introduced, the images were less than optimal because the resolution was too low to adequately visualize anatomic structures deeper within the body. The images often adequately displayed only the denser structures closer to the body surface, or surface boundaries, which appeared blocky and jagged; the soft tissue or less dense structures were not visualized. By using fast Fourier transforms (3DFT), new algorithms for mathematical calculations, and computers with faster processing times, three-dimensional images have become smooth, sharply focused, and realistically shaded to demonstrate soft tissue. The ability to demonstrate soft tissue in three-dimensional imaging is referred to as a volumetric rendering technique.

Volumetric rendering is a computer program whereby a “stack” of sequential images is processed as a volume, with the gray-scale intensity information in each pixel interpolated along the z axis (perpendicular to the x and y axes). Interpolation is necessary because the field of view of the scan (the x and y axes) is not the same as the z axis because of interscan spacing. Following this computer process, new data are generated by interpolation, resulting in each new voxel having the same dimensions in all directions. The volumetric rendering technique enables definition of the object’s thickness, a crucial factor in three-dimensional imaging and in visualizing subtle densities.
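The z-axis interpolation step can be sketched as linear interpolation between corresponding pixels of adjacent slices; the slice values and spacing below are illustrative:

```python
def interpolate_slices(slice_a, slice_b, n_new=1):
    """Generate slices between two adjacent ones by linear interpolation.

    Each new slice's pixel is a weighted average of the corresponding
    pixels in the two originals, evening out the z spacing so that the
    voxels end up with equal dimensions in all three axes.
    """
    stack = [slice_a]
    for i in range(1, n_new + 1):
        t = i / (n_new + 1)  # fractional position between the two slices
        stack.append([[(1 - t) * a + t * b
                       for a, b in zip(row_a, row_b)]
                      for row_a, row_b in zip(slice_a, slice_b)])
    stack.append(slice_b)
    return stack

a = [[0, 0], [0, 0]]
b = [[10, 10], [10, 10]]
mid = interpolate_slices(a, b)[1]  # the one generated in-between slice
print(mid)  # [[5.0, 5.0], [5.0, 5.0]]
```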







