Computed tomography (CT), also called computerized axial tomography (CAT) scanning or body section röntgenography, is the process of creating a cross-sectional tomographic plane (slice) of any part of the body. The word "tomography" is derived from the Greek tomos (slice) and graphein (to write). A patient is scanned by an x-ray tube rotating about the body part being examined, and a detector assembly detects the transmitted radiation. The image is reconstructed by a computer using x-ray absorption measurements collected at multiple points about the periphery of the part being scanned.
Today, CT is a well-accepted imaging modality for many body applications, since CT imaging often provides a great deal of unique diagnostic information. CT is used for a wide variety of neurologic and somatic procedures and provides diagnostic information that cannot be obtained with any other method. The most common procedures involve the head (e.g., brain, skull, sinuses, facial bones, orbits, IACs, and sella turcica), chest, abdomen, and pelvis (e.g., liver, gallbladder, pancreas, spleen, kidneys, adrenal glands, intestines, reproductive organs). Computed tomography is used to detect abnormalities such as blood clots, cysts, fractures, infections, and tumors in internal structures (e.g., bones, muscles, organs, soft tissue). It can also be used to detect abnormalities in the neck and spine (e.g., vertebrae, intervertebral discs, spinal cord) and in the nerves and blood vessels of the upper and lower extremities.
The procedure may also be used to guide the placement of instruments within the body (e.g., to perform a biopsy), and CT-guided drainage of fluid collections offers an alternative to surgery for some patients. Although these procedures are considered invasive, they offer shorter recovery periods, no exposure to anesthesia, and less risk of infection. CT is also used in radiation oncology for radiation therapy treatment planning. CT scans taken through the treatment field, with the patient in the treatment position, have drastically improved the accuracy and quality of the therapy provided.
The amount of radiation used in a CT scan is low, and the procedure is considered to be safe. However, CT scans should be used with caution in women who are pregnant, especially during the first trimester. Other diagnostic tests (e.g., ultrasound) may be used during pregnancy.
Comparison with conventional radiography
Reviewing conventional radiography helps explain the uniqueness of CT diagnostic information. When a conventional x-ray exposure is made, the transmitted radiation passes through the patient and is detected by x-ray film or an image-intensifier phosphor. First, for each exposure to radiation, one diagnostic image with a fixed density and contrast is produced. Second, all body structures are superimposed on one sheet of x-ray film. Thus, highlighting certain anatomy requires exact positioning of the patient, often the use of contrast agents, and frequently more than one exposure.
Low-density tissue that would normally be obscured by higher-density anatomy on a conventional radiograph can be clearly visualized with CT. For this reason, CT is valuable in neurologic work, in which the brain is surrounded by the skull. Likewise, in many body examinations, low-density tissue that would otherwise be hidden or blend with surrounding anatomy can be clearly visualized.
Although it seems obvious, it should also be noted that the CT image displays the entire cross section of the slice of anatomy that was scanned. Thus the size and location of any pathologic condition can be determined with extreme accuracy within a given CT slice. With conventional radiography, multiple exposures and contrast media are often required to estimate the size and location of the diseased area.
Contrast of image: CT measures and can reveal significantly more minute differences in x-ray attenuation than can be recorded by conventional radiography. For example, conventional radiography requires a minimum difference in tissue density of 2% to 5% to radiographically separate structures; CT can resolve differences in tissue density as low as 0.5%. In the figure below, the gray and white matter of the brain can be distinguished easily.
Image manipulation: In conventional radiography, only a single radiograph with a fixed contrast and density is obtained for each patient exposure to radiation. Once the film has been processed, the patient must be exposed to radiation again to produce another image. The CT image, on the other hand, is the result of complex mathematical calculations that the computer performs to reconstruct an image, which is stored in the computer's memory. The CT image is displayed on the monitor and can be altered in many ways.
HISTORICAL DEVELOPMENT
In the early 1900s, the Italian radiologist Alessandro Vallebona proposed a method to represent a single slice of the body on the radiographic film. This method was known as tomography. The idea is based on simple principles of projective geometry: moving synchronously and in opposite directions the X-ray tube and the film, which are connected together by a rod whose pivot point is the focus; the image created by the points on the focal plane appears sharper, while the images of the other points annihilate as noise. This is only marginally effective, as blurring occurs only in the "x" plane. There are also more complex devices which can move in more than one plane and perform more effective blurring.
Godfrey Newbold Hounsfield of the Central Research Laboratory of EMI, Ltd., and Dr. James Ambrose, a physician at Atkinson Morley's Hospital in London, England, are generally given credit for the development of CT and its first successful clinical demonstration in the early 1970s. In 1971 the first full-scale unit for head scanning was installed at Atkinson Morley's Hospital, Wimbledon, England. Its value in providing neurologic information enabled it to gain rapid acceptance.
The first CT units in the United States were installed in 1973 at the Mayo Clinic and Massachusetts General Hospital. In 1974, Dr. Robert Ledley at Georgetown University Medical Center developed the first scanner capable of visualizing any section of the body (the whole-body scanner), which greatly expanded the diagnostic capabilities of CT.
TECHNICAL ASPECTS
To obtain one axial image, a series of steps is performed. The tube rotates about the patient, irradiating the area of interest. The detectors measure the remnant radiation, translate it into attenuation coefficients, and relay them to the computer. When the computer receives the data from the detectors, it creates a CT number based on the average intensity of the remnant radiation.
CT numbers are also termed Hounsfield units (HU, in honor of the inventor Godfrey Newbold Hounsfield). They are defined as a relative comparison of the x-ray attenuation of each voxel of tissue with that of an equal volume of water. CT numbers vary proportionately with tissue density: a high CT number indicates dense tissue, and a low CT number indicates less dense tissue.
In general, they are related to the attenuation coefficient of the tissue (µ) and of water (µw) as follows:
HU = ((µ − µw) / µw) × 1000
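The definition can be checked with a short calculation. The sketch below is illustrative; the water attenuation coefficient used is a typical textbook value, not calibrated scanner data.

```python
MU_WATER = 0.19  # linear attenuation coefficient of water in cm^-1 (illustrative)

def hounsfield(mu, mu_water=MU_WATER):
    """Convert a linear attenuation coefficient to a CT number (HU)."""
    return (mu - mu_water) / mu_water * 1000.0

print(hounsfield(MU_WATER))  # water -> 0.0 HU
print(hounsfield(0.0))       # air (mu ~ 0) -> -1000.0 HU, as in the table below
```

Water always maps to 0 HU and air to about −1000 HU regardless of the exact value of µw, which is why the scale is scanner-independent.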
Table 1. Sample CT numbers for various tissues.
Tissue CT number (HU)
Metal +2000 to +4000
Bone +1000
Liver +40 to +60
Aorta +35 to +50
White matter +20 to +30
Grey matter +37 to +45
Tumor +25 to +100
Blood ( Fluid) +25 to +50
Blood (clotted) +50 to +75
Blood (old) +10 to +15
Muscle +10 to +40
Kidney +30
Cerebrospinal fluid +15
Gall Bladder +5 to +30
Cyst -5 to +10
Water 0
Orbits -25
Fat -50 to -100
Air -1000
In accordance with this system, lesions whose attenuation values are close to that of water are consistent with, but not specific for, cysts. Lesions composed solely or predominantly of fat produce negative CT numbers; however, some types of liposarcoma contain great amounts of fat, and some forms of lipoma reveal abundant nonfatty tissue. Haematomas characteristically demonstrate inhomogeneous areas with regions of both high attenuation (approximately 50 HU) and low attenuation (approximately 10 HU) in the subacute stage and homogeneous areas of low attenuation in the chronic stage. The measurement of attenuation values of bone lesions may be more difficult, especially in narrow bones in which the contribution of the cortex may prohibit accurate assessment.
The identification of gas in soft tissue or bone by CT is possible owing to its very low attenuation value. Gas within a vertebral body documented by CT, for example, is an important sign of ischaemic necrosis of bone. Intraosseous gas is also identified in some cases of osteomyelitis and in subchondral cysts (pneumatocysts), particularly in the ilium and vertebral body.
SCANNER COMPONENTS
The major components of a CT scanner are the computer and operator console, the gantry, and the table. Scanners will have slight variations in design and appearance according to the manufacturer.
The gantry houses the x-ray tube, the data acquisition system (DAS; the part of the detector assembly that converts analog signals to digital signals that can be used by the CT computer), and the detectors for radiation production and detection. Every gantry has an opening, or aperture, to accommodate most patients, and the gantry can be tilted in either direction.
The table is an automated device linked to the computer and gantry. CT tables are made of either wood or low-density carbon composite, both of which will support the patient without causing image artifacts.
The operator console is the point from which the operator controls the scanner. Here the operator (radiographer) can select examination protocols and adjust the image by changing the width or the center (level) of the window.
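Window width and level control how the wide range of CT numbers is mapped onto the limited gray scale of the display. A minimal sketch of that mapping follows; the brain-window settings shown are common illustrative values, not a vendor preset.

```python
def window(hu, center, width):
    """Map a CT number (HU) to an 8-bit gray level using window center/width."""
    low = center - width / 2
    high = center + width / 2
    if hu <= low:
        return 0        # everything at or below the window is black
    if hu >= high:
        return 255      # everything at or above the window is white
    return round((hu - low) / (high - low) * 255)

# Example brain window: center 40 HU, width 80 HU
print(window(-10, 40, 80))  # below the window -> 0
print(window(40, 40, 80))   # window center -> mid-gray
print(window(90, 40, 80))   # above the window -> 255
```

Narrowing the width increases displayed contrast over a small HU range (soft tissue), while a wide window compresses a large HU range (lung, bone) into the same gray scale.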
The Procedures of CT Scan Examination
Before undergoing a CT scan, patients must remove all metallic materials (e.g., jewelry, clothing with snaps, zippers) and may be required to change into a hospital gown that will not interfere with the x-ray images. Patients lie on a movable table, which is slipped into a doughnut-shaped computed tomography scanner.
To provide clear images, patients must remain as still as possible during CT scan. At certain points during a CT scan of the chest or abdomen, the radiographers may ask the patient not to breathe for a few seconds. CT scans can be performed on an outpatient basis, unless they are part of a patient's inpatient care. Although each facility may have specific protocols in place, generally, CT scans follow this process:
1. When the patient arrives for the CT scan, he/she will be asked to remove any clothing, jewelry, or other objects that may interfere with the scan.
2. If the patient will be having a procedure done with contrast, an intravenous (IV) line will be started in the hand or arm for injection of the contrast medication. For oral contrast, the patient will be given medication to swallow.
3. The patient will lie on a scan table that slides into the gantry.
4. As the scanner begins to rotate around the patient, x-rays will pass through the body for short amounts of time.
5. A detector assembly detects the x-rays exiting the patient and feeds the information, referred to as raw data, back to the host computer.
6. The computer will transform the information into an image to be interpreted by the radiologist.
CONTRAST AGENT
A contrast agent (e.g., iodine-based dye, barium solution) may be administered prior to the CT scan to allow organs and structures to be seen more easily. Contrast agents can be administered intravenously (IV), by injection, or orally. Patients usually are instructed not to eat or drink for a few hours prior to contrast administration because the dye may cause stomach upset. Patients may be required to drink an oral contrast solution 1–2 hours before a CT scan of the abdomen or pelvis.
Contrast dye may cause a rash, itching, or a feeling of warmth throughout the body. Usually, these side effects are brief and resolve without treatment. Antihistamines may be administered to help relieve symptoms.
A severe anaphylactic reaction (e.g., hives, difficulty breathing) to the contrast dye may occur. This reaction, which is rare, is life threatening and requires immediate treatment. Patients with a prior allergic reaction to contrast dye or medication and patients who have asthma, emphysema, or heart disease are at increased risk for anaphylactic reaction. Epinephrine, corticosteroids, and antihistamines are used to treat this condition. Rarely, contrast dye may cause kidney failure. Patients with diabetes, impaired kidney function, and patients who are dehydrated are at higher risk for kidney failure.
Advances in computed tomography technology
Advances in computed tomography technology include the following:
• high-resolution computed tomography
This type of CT scan uses very thin slices (less than one-tenth of an inch), which are effective in providing greater detail in certain conditions such as lung disease.
• helical or spiral computed tomography
During this type of CT scan, both the patient and the x-ray beam move continuously, with the x-ray beam circling the patient. The images are obtained much more quickly than with standard CT scans. The resulting images have greater resolution and contrast, thus providing more detailed information.
• ultrafast computed tomography (also called electron beam computed tomography)
This type of CT scan produces images very rapidly, thus creating a type of "movie" of moving parts of the body, such as the chambers and valves of the heart. This scan may be used to obtain information about calcium build-up inside the coronary arteries of the heart.
• Multidetector computed tomography: Multidetector computed tomography (MDCT) is also known by a confusing array of other terms, such as multidetector CT, multidetector-row computed tomography, multidetector-row CT, multisection CT, multislice computed tomography, and multislice CT (MSCT).
In MDCT or MSCT, a two-dimensional array of detector elements replaces the linear array used in typical conventional and helical CT scanners. The two-dimensional detector array permits CT scanners to acquire multiple slices or sections simultaneously and greatly increases the speed of CT image acquisition. Image reconstruction in MDCT or MSCT is more complicated than in single-section CT. Nonetheless, MDCT has enabled high-resolution CT applications such as CT angiography and CT colonography.
• Combined computed tomography and positron emission tomography (PET/CT)
The combination of computed tomography and positron emission tomography technologies into a single machine is referred to as PET/CT. PET/CT combines the ability of CT to provide detailed anatomy with the ability of PET to show cell function and metabolism to offer greater accuracy in the diagnosis and treatment of certain types of diseases, particularly cancer. PET/CT may also be used to evaluate epilepsy, Alzheimer's disease, and coronary artery disease.
Patient Radiation Doses
The various factors affecting patient dose are: patient thickness; generator and tube factors (kilovoltage, filtration, tube current, scan-on time, and focal-spot size); gantry factors (beam collimation, slice width and overlap, scan orientation, and detector efficiency); and the image quality desired.
The main issue within radiology today is how to reduce the radiation dose during CT examinations without compromising image quality. Generally, a high radiation dose results in high-quality images, while a lower dose leads to increased image noise and unsharp images. Unfortunately, as the radiation dose increases, so does the associated risk of radiation-induced cancer, even though this risk is extremely small. A radiation exposure of around 1200 mrem (similar to a 4-view mammogram) carries a radiation-induced cancer risk of about a million to one. However, several methods can be used to lower the exposure to ionizing radiation during a CT scan:
1. New software technology can significantly reduce the radiation dose. The software works as a filter that reduces random noise and enhances structures. In this way, it is possible to get high-quality images and at the same time lower the dose by as much as 30 to 70 percent.
2. Individualize the examination and adjust the radiation dose to the body type and body organ examined. Different body types and organs require different amounts of radiation.
3. Prior to every CT examination, evaluate the appropriateness of the exam: whether it is justified, or whether another type of examination is more suitable.
NUCLEAR MEDICINE
Nuclear medicine is a branch of medical imaging that involves the use of radioactive isotopes in the diagnosis and treatment of disease. This imaging may also be referred to as radionuclide imaging or nuclear scintigraphy. The procedures use pharmaceuticals that have been labeled with radionuclides (radiopharmaceuticals). The radionuclides used in nuclear medicine are produced in nuclear reactors or particle accelerators (cyclotrons). In diagnosis, radiopharmaceuticals are administered to patients and the radiation emitted is measured using a gamma camera. The radiation from the radiopharmaceutical makes it possible to image the distribution of the medicinal product throughout the body. The radiation dose is usually very low, lower than that from x-ray investigations.
History
Nuclear medicine began as a medical specialty for the diagnosis and treatment of disease in the late 1950s and early 1960s. However, long before that, in the early 1800s, scientists such as John Dalton and Amedeo Avogadro were proposing theories on atomic and molecular structure that would serve as the basis for later research and, eventually, the discovery of radioactivity by A. H. Becquerel in 1896.
Its origins stem from many scientific discoveries, most notably the discovery of x-rays in 1895 and the discovery of "artificial radioactivity" in 1934. The first clinical use of "artificial radioactivity" was carried out in 1937 for the treatment of a patient with leukemia at the University of California at Berkeley.
A landmark event for nuclear medicine occurred in 1946, when a thyroid cancer patient's treatment with radioactive iodine led to complete disappearance of the patient's cancer. This has been considered by some as the true beginning of nuclear medicine. Widespread clinical use of nuclear medicine started in the early 1950s, as its use increased to measure thyroid function, to diagnose thyroid disease, and to treat patients with hyperthyroidism.
In the mid-sixties and the years that followed, the growth of nuclear medicine as a specialty discipline was phenomenal, driven by significant advances in nuclear medicine technology. The 1970s brought the visualisation of most other organs of the body with nuclear medicine, including liver and spleen scanning, brain tumour localisation, and studies of the gastrointestinal tract. The 1980s saw the use of radiopharmaceuticals for such critical diagnoses as heart disease and the development of digital computers to add additional power to the technique.
Today, very complex imaging and computer systems are used with these different radioactive compounds, not only to image and treat disease but also to provide functional and quantitative analysis of many body systems. Nuclear medicine has found a unique niche in the medical imaging field by virtue of its functional imaging capacity.
Physical Principles of Nuclear Medicine
The atomic number describes the number of protons in the nucleus. For a neutral atom this is also the number of electrons outside the nucleus. Subtracting the atomic number from the atomic mass number gives the number of neutrons in the nucleus.
Isotopes are atoms of the same element (i.e., they have the same number of protons, or the same atomic number) which have a different number of neutrons in the nucleus. Isotopes of an element have similar chemical properties. Radioactive isotopes are called radioisotopes. Most of the elements in the periodic table have several isotopes, found in varying proportions for any given element. The average atomic mass of an element takes into account the relative proportions of its isotopes found in nature.
A nuclear binding force holds the nucleus of the atom together. The nuclear mass defect, a slightly lower mass of the nucleus compared to the sum of the masses of its constituent matter, is due to the nuclear binding energy holding the nucleus together. The mass defect can be used to calculate the nuclear binding energy, with E = mc2. The average binding energy per nucleon is a measure of nuclear stability. The higher the average binding energy, the more stable the nucleus.
The Bohr model of the atom described the electrons as orbiting in discrete, precisely defined circular orbits. Electrons can only occupy certain allowed orbitals, and for an electron to occupy an allowed orbit, a certain amount of energy must be available. Each orbit is assigned a quantum number, with the lowest quantum numbers assigned to the orbitals closest to the nucleus. Only a specified maximum number of electrons can occupy an orbital. Under normal circumstances, electrons occupy the lowest-energy orbitals, closest to the nucleus. By absorbing additional energy, electrons can be promoted to higher orbitals, and they release that energy when they return to lower energy levels.
Photons are used to describe the wave-particle duality of light. The energy of a photon depends upon its frequency, which helps to explain the photoelectric effect: only photons having a sufficiently high energy are capable of dislodging an electron from the illuminated surface. E = hν, where E is the photon energy in joules, ν is the photon frequency in hertz, and h is Planck's constant, 6.626 × 10⁻³⁴ J·s. Quantum theory offers a mathematical model to help explain the nature of the atom; it describes a region surrounding the nucleus that has the highest probability of containing an electron. These orbital "clouds" have some unusual and interesting shapes.
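As a numerical illustration of E = hν (the frequency used below is an approximate figure for a 140 keV gamma photon, typical of Tc-99m imaging, chosen for illustration):

```python
H = 6.626e-34  # Planck's constant in J*s

def photon_energy_joules(frequency_hz):
    """Photon energy E = h * nu."""
    return H * frequency_hz

# A gamma photon with frequency ~3.4e19 Hz carries about 2.25e-14 J,
# i.e. roughly 140 keV (1 keV = 1.602e-16 J).
energy = photon_energy_joules(3.4e19)
print(energy)
print(energy / 1.602e-16)  # the same energy expressed in keV
```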
Radioactive decay is the process in which an unstable atomic nucleus spontaneously loses energy by emitting ionizing particles and radiation. These emissions are collectively called ionizing radiation. Depending on how the nucleus loses this excess energy, either a lower-energy atom of the same form results or a completely different nucleus and atom is formed. The most common types of radiation are called alpha, beta, and gamma radiation, but there are several other varieties of radioactive decay. Radioactive decay rates are normally stated in terms of half-lives, and the half-life of a given nuclear species is related to its radiation risk. The different types of radioactivity lead to different decay paths, which transmute the nuclei into other chemical elements. Examining the amounts of the decay products makes radioactive dating possible.
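Half-life arithmetic follows directly: after each half-life, half of the remaining nuclei have decayed. A minimal sketch (the 6-hour half-life is roughly that of Tc-99m, used here only as an example):

```python
def remaining_activity(initial_activity, half_life, elapsed):
    """Activity remaining after `elapsed` time, given the half-life
    (any consistent time units): A(t) = A0 * (1/2)**(t / T_half)."""
    return initial_activity * 0.5 ** (elapsed / half_life)

# Starting from 100 MBq with a 6-hour half-life:
print(remaining_activity(100.0, 6.0, 6.0))   # one half-life  -> 50.0 MBq
print(remaining_activity(100.0, 6.0, 12.0))  # two half-lives -> 25.0 MBq
```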
There are quite a few naturally occurring radionuclides: any nuclide with an atomic number greater than 83 is radioactive (an atom's atomic number is simply the total number of protons in the nucleus), and there are also many naturally occurring radionuclides with lower atomic numbers. Besides the radionuclides that occur naturally in the environment, there is another class of "man-made" or artificial radionuclides. Artificial radionuclides are generally produced in a cyclotron or some other particle accelerator, in which a stable nucleus is bombarded by specific particles (neutrons, protons, electrons, or some combination of these). This makes the nucleus of the starting material unstable, and it will then try to become stable by emitting radiation.
Nuclear Pharmacy (radiopharmaceuticals)
Nuclear Pharmacy involves the preparation of radioactive materials for use in nuclear medicine procedures. Radionuclides are combined with other chemical compounds or pharmaceuticals to form radiopharmaceuticals. Radiopharmaceuticals are administered to patients and the radiation emitted can localize to specific organs or cellular receptors. The external detectors (gamma cameras) capture and form images from the radiation emitted by the radiopharmaceuticals.
The concept of nuclear pharmacy was first described in 1960 by Captain William H. Briner while at the National Institutes of Health (NIH) in Bethesda, Maryland. Along with Mr. Briner, John E. Christian, a professor in the School of Pharmacy at Purdue University, wrote articles and contributed in other ways to set the stage for nuclear pharmacy. William Briner started the NIH Radiopharmacy in 1958 and brought about principles and procedures important to the assurance of quality radiopharmaceuticals. Christian developed the first formal lecture and laboratory courses in the United States for teaching the basic principles of radioisotope applications. John Christian and William Briner were both active on key national committees responsible for the development, regulation, and utilization of radiopharmaceuticals.
In the mid-1970s, a petition was submitted requesting the formation of a Section on Nuclear Pharmacy in the Academy of General Practice, currently called the Academy of Pharmacy Practice and Management. On April 23, 1975, the petition was approved by the American Pharmacists Association (APhA) Board of Trustees, and nuclear pharmacy thus became a new area of pharmacy.
The most commonly used isotope in nuclear medicine is technetium-99m, which is readily and continuously available from a generator system. This generator system uses molybdenum-99 as the "parent." Molybdenum-99 can be the product of either U-235 fission in a nuclear reactor or neutron irradiation of Mo-98 in a reactor. Molybdenum-99 has a half-life of 66.7 hours and decays (82% of the time) to a daughter product known as metastable technetium (Tc-99m). The most commonly used positron-emitting radioisotope, F-18, is not produced in a nuclear reactor but in a circular accelerator called a cyclotron, which accelerates protons to bombard the stable heavy isotope of oxygen, O-18. O-18 constitutes about 0.20% of ordinary oxygen (mostly O-16), from which it is extracted.

A typical nuclear medicine study involves administration of a radionuclide into the body by intravenous injection in liquid or aggregate form, ingestion while combined with food, inhalation as a gas or aerosol, or, rarely, injection of a radionuclide that has undergone micro-encapsulation. Some studies require the labeling of a patient's own blood cells with a radionuclide (leukocyte scintigraphy and red blood cell scintigraphy). Most diagnostic radionuclides emit gamma rays, while the cell-damaging properties of beta particles are used in therapeutic applications. Refined radionuclides for use in nuclear medicine are derived from fission or activation processes in nuclear reactors (which produce radioisotopes with longer half-lives), from cyclotrons (which produce radioisotopes with shorter half-lives), or from natural decay processes in dedicated generators, i.e., molybdenum/technetium or strontium/rubidium.
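The parent-daughter relationship in a Mo-99/Tc-99m generator can be sketched with the Bateman equation. The 66.7 h parent half-life and 82% branching fraction come from the text; the 6.01 h Tc-99m half-life is a standard reference value, and the result is an idealized estimate that ignores elution efficiency.

```python
import math

MO99_HALF_LIFE_H = 66.7   # parent half-life in hours (from the text)
TC99M_HALF_LIFE_H = 6.01  # daughter half-life in hours (reference value)
BRANCHING = 0.82          # fraction of Mo-99 decays feeding Tc-99m (from the text)

def tc99m_activity_fraction(t_hours):
    """Tc-99m activity at time t after elution, as a fraction of the initial
    Mo-99 activity, from the Bateman equation for a parent-daughter pair."""
    lam_p = math.log(2) / MO99_HALF_LIFE_H
    lam_d = math.log(2) / TC99M_HALF_LIFE_H
    return (BRANCHING * lam_d / (lam_d - lam_p)
            * (math.exp(-lam_p * t_hours) - math.exp(-lam_d * t_hours)))

# About 24 hours after elution the daughter activity approaches
# transient equilibrium with the slowly decaying parent (~0.65 of the
# initial Mo-99 activity in this idealized model).
print(tc99m_activity_fraction(24.0))
```

This is why generators are typically "milked" about once a day: waiting much longer gains little additional Tc-99m while the parent continues to decay.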
The most commonly used intravenous radionuclides are: Technetium-99m, Iodine-123 and -131, Gallium-67, Thallium-201, Fluorine-18 fluorodeoxyglucose, and Indium-111 labeled leukocytes. The most commonly used gaseous/aerosol radionuclides are: Krypton-81m, Xenon-133, Technetium-99m Technegas, and Technetium-99m DTPA.
The generator forms the radionuclide that is retained on an internal column until the generator is "milked". When "milking" the generator, sodium chloride is passed over the column, which removes the radioactive material. The eluate is then collected in a shielded evacuated vial. After performing quality assurance tests on the eluate, it can be used in the preparation of the final radiopharmaceutical products.
Clinical Nuclear Medicine
Nuclear medicine procedures are generally divided into three basic categories: in vivo, in vitro/radioimmunoassay (RIA), and radionuclide therapy procedures.
In Vivo Procedures
The term in vivo is defined as "within the living body." This category includes all diagnostic nuclear medicine imaging procedures. Since diagnostic imaging procedures are based on the distribution of radiopharmaceuticals within the body, they are classified as in vivo examinations.
There are a wide variety of in vivo/diagnostic imaging examinations performed in nuclear medicine. These examinations can be described based on the imaging method used: static, whole-body, dynamic, single photon emission computed tomography (SPECT), and positron emission tomography (PET).
In Vitro Procedures
In vitro is defined as "within a glass; observable in a test tube; in an artificial environment." This category is used to describe those nuclear medicine examinations that require an evaluation or analysis of radioactive samples taken from the human body. Results from these examinations are usually a specific quantitative value rather than a diagnostic image.
Radioimmunoassay
Radioimmunoassay (RIA) procedures are performed on body samples such as whole blood, serum, spinal fluid, and urine. Specific target structures, or ligands, such as antibodies or metabolically active drugs, are labeled with a radioactive tracer to determine their levels. Examples of radioimmunoassays include thyroid hormone values (T3, triiodothyronine; T4, thyroxine; TSH, thyroid-stimulating hormone), drug levels (digoxin, digitoxin, methotrexate, theophylline, aminophylline, cyclosporine), and vitamins (vitamin B12, folic acid). Levels of these particular hormones, drugs, and vitamins are determined by counting the labeled samples in a specialized scintillation counter. These assays are very sensitive and specific and are used to determine minute levels (µg/dl) of a wide range of ligands.
Analysis
The end result of the nuclear medicine imaging process is a "dataset" comprising one or more images. In multi-image datasets, the array of images may represent a time sequence (i.e., cine or movie), often called a "dynamic" dataset, a cardiac-gated time sequence, or a spatial sequence in which the gamma camera is moved relative to the patient. SPECT (single photon emission computed tomography) is the process by which images acquired from a rotating gamma camera are reconstructed to produce an image of a "slice" through the patient at a particular position, representing the distribution of radionuclide within the patient.
Many of the procedures performed in a nuclear medicine department require some form of quantitative analysis, which provides physicians with numeric results based on function. Specialized software allows nuclear medicine computers to collect, process, and analyze functional information obtained from nuclear medicine imaging systems. Cardiac ejection fractions are among the more common quantitative results provided by nuclear medicine procedures.
The nuclear medicine computer may require millions of lines of source code to provide quantitative analysis packages for each of the specific imaging techniques available in nuclear medicine. Time sequences can be further analysed using kinetic models such as multi-compartment models or a Patlak plot.
Radiation Safety
The radiation protection requirements in nuclear medicine are unique and different from the general radiation safety measures used for diagnostic x-ray. Most of the radionuclides used in nuclear medicine are in either liquid or gaseous form. Because of the nature of radioactive decay, these liquids or gases continually emit radiation (unlike a diagnostic x-ray tube, which can be switched on and off) and therefore require special precautions.
The radiation dose from a nuclear medicine investigation is expressed as an effective dose with units of sieverts (usually given in millisieverts, mSv). The effective dose resulting from an investigation is influenced by the amount of radioactivity administered in megabecquerels (MBq), the physical properties of the radiopharmaceutical used, its distribution in the body and its rate of clearance from the body.
Effective doses can range from 6 μSv (0.006 mSv) for a 3 MBq chromium-51 EDTA measurement of glomerular filtration rate to 37 mSv for a 150 MBq thallium-201 non-specific tumour imaging procedure. The common bone scan with 600 MBq of technetium-99m-MDP has an effective dose of 3 mSv (1).
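The figures above imply a fixed dose coefficient (mSv per MBq) for each radiopharmaceutical. A minimal sketch, with coefficients back-calculated from the numbers quoted in the text (they are illustrative, not authoritative dosimetry values):

```python
# Effective dose = administered activity (MBq) x a per-radiopharmaceutical
# dose coefficient (mSv/MBq). Coefficients here are back-calculated from
# the figures quoted in the text and are illustrative only.
DOSE_COEFF_MSV_PER_MBQ = {
    "Cr-51 EDTA (GFR)":  0.006 / 3,  # 6 uSv from 3 MBq
    "Tl-201 (tumour)":   37 / 150,   # 37 mSv from 150 MBq
    "Tc-99m MDP (bone)": 3 / 600,    # 3 mSv from 600 MBq
}

def effective_dose_msv(agent: str, activity_mbq: float) -> float:
    """Estimated effective dose in mSv for a given administered activity."""
    return DOSE_COEFF_MSV_PER_MBQ[agent] * activity_mbq

print(effective_dose_msv("Tc-99m MDP (bone)", 600.0))  # ~3 mSv
```

In practice, published dose coefficients (e.g., from ICRP tables) would replace the back-calculated values above.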
Formerly, the units of measurement were the curie (Ci), equal to 3.7 × 10¹⁰ Bq and the activity of 1.0 gram of radium (Ra-226); the rad (radiation absorbed dose), now replaced by the gray; and the rem (Röntgen equivalent man), now replaced by the sievert. The rad and rem are essentially equivalent for almost all nuclear medicine procedures; only alpha radiation produces a higher rem or Sv value, due to its much higher relative biological effectiveness (RBE). Alpha emitters are nowadays rarely used in nuclear medicine, but were used extensively before the advent of nuclear reactor and accelerator produced radioisotopes. The concepts involved in radiation exposure to humans are covered by the field of health physics.
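The legacy-to-SI conversions above can be captured in a few constants. A minimal sketch (the alpha RBE of about 20 is a standard quality-factor value assumed here, not quoted in the text):

```python
# Legacy-to-SI radiation unit conversions from the passage above.
CI_TO_BQ = 3.7e10    # 1 curie = 3.7 x 10^10 becquerel
RAD_TO_GY = 0.01     # 1 rad = 0.01 gray
REM_TO_SV = 0.01     # 1 rem = 0.01 sievert

def rem_from_rad(dose_rad: float, rbe: float = 1.0) -> float:
    """Dose equivalent (rem) = absorbed dose (rad) x RBE.

    For almost all nuclear medicine procedures RBE is ~1, so rad and rem
    are numerically equal; alpha radiation has a much higher RBE (~20, a
    standard quality-factor value assumed here, not from the text).
    """
    return dose_rad * rbe

print(rem_from_rad(2.0))        # 2.0 rem from 2 rad of gamma radiation
print(rem_from_rad(2.0, 20.0))  # 40.0 rem from 2 rad of alpha radiation
```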
In order to provide protection while handling radioactive material, most compounding is done behind leaded glass shielding and using leaded glass syringe shields and lead containers to hold the radioactive material. Lead is an excellent shielding material that serves to protect the nuclear worker from the radioactive emissions.
FLUOROSCOPY
Fluoroscopy is a dynamic radiographic examination, in contrast to diagnostic radiography, which is static in character. Fluoroscopy is an imaging technique used to obtain real-time images of the internal structures of a patient through the use of a fluoroscope. In its simplest form, a fluoroscope consists of an x-ray source and a fluorescent screen between which the patient is placed. Modern fluoroscopes (digital fluoroscopy), however, couple the screen to an x-ray image intensifier or flat-panel detector and a CCD video camera, allowing the images to be displayed and recorded on a monitor. The use of x-rays, a form of ionizing radiation, requires that the potential risks of a procedure be carefully balanced against its benefits to the patient. While physicians always try to use low dose rates during fluoroscopic procedures, the length of a typical procedure often results in a relatively high absorbed dose to the patient. Recent advances include the digitization of the captured images and flat-panel detector systems, which reduce the radiation dose to the patient still further.
Types of Equipment
The Fluoroscopic x-ray tube and image receptor are mounted on a C-arm to maintain their alignment at all times. The C-arm permits the image receptor to be raised and lowered to vary the beam geometry for maximum resolution while the X-ray tube remains in position. It also permits scanning the length and width of the x-ray table. There are two types of C-arm arrangements, both described by the location of the x-ray tube. Under-table units have the x-ray tube under the table while over-table units suspend the tube over the patient. The arm that supports the equipment suspended over the table is called the carriage.
Fluoroscopy Equipment
Fluoroscopy X-Ray Tube
Fluoroscopic x-ray tubes are very similar to diagnostic tubes except that they are designed to operate for longer periods of time at much lower mA. The fluoroscopic tube is operated by a foot switch, which leaves the fluoroscopist with both hands free to operate the carriage and to position and palpate the patient.
X-ray Image Intensifiers
An image intensifier is a device that intensifies low light-level images to light levels that can be seen with the human eye or detected by a video camera. It is a vacuum tube with an input window on whose inside surface a light-sensitive layer, called the photocathode, has been deposited. Photons are absorbed in the photocathode and give rise to the emission of electrons into the vacuum. These electrons are accelerated by an electric field to increase their energy and to focus them. After multiplication by an MCP (microchannel plate), the electrons are finally accelerated towards the anode screen. The anode screen contains a layer of phosphorescent material covered by a thin aluminium film. When the electrons strike the anode, their energy is converted into photons again. Because of the multiplication and the increased energy of the electrons, the output brightness is higher than the original input light intensity.
Modern image intensifiers no longer use a separate fluorescent screen. Instead, a caesium iodide phosphor is deposited directly on the photocathode of the intensifier tube. On a typical general-purpose system, the output image is approximately 10⁵ times brighter than the input image. This brightness gain comprises a flux gain (amplification of photon number) and a minification gain (concentration of photons from a large input screen onto a small output screen), each of approximately 100. This level of gain is sufficient that quantum noise, due to the limited number of x-ray photons, is a significant factor limiting image quality.
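The minification component of the brightness gain follows from the ratio of input to output screen areas. A minimal sketch; the screen diameters used are assumed illustrative values, not figures from the text:

```python
# Brightness gain = flux gain x minification gain. The screen diameters
# below are assumed illustrative values, not figures from the text.
def minification_gain(d_input_mm: float, d_output_mm: float) -> float:
    # Photons collected over the large input screen are concentrated onto
    # the small output screen, so the gain is the ratio of the areas.
    return (d_input_mm / d_output_mm) ** 2

def brightness_gain(flux_gain: float, d_input_mm: float, d_output_mm: float) -> float:
    return flux_gain * minification_gain(d_input_mm, d_output_mm)

# A 250 mm input screen focused onto a 25 mm output screen gives a
# minification gain of 100, matching the "approximately 100" in the text.
print(brightness_gain(100.0, 250.0, 25.0))  # 10000.0
```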
Image intensifiers are available with input diameters of up to 45 cm and a resolution of approximately 2-3 line pairs/mm.
Flat-panel detectors
The introduction of flat-panel detectors allows for the replacement of the image intensifier in fluoroscope design. Flat panel detectors offer increased sensitivity to X-rays, and therefore have the potential to reduce patient radiation dose. Temporal resolution is also improved over image intensifiers, reducing motion blurring. Contrast ratio is also improved over image intensifiers: flat-panel detectors are linear over a very wide latitude, whereas image intensifiers have a maximum contrast ratio of about 35:1. Spatial resolution is approximately equal, although an image intensifier operating in 'magnification' mode may be slightly better than a flat panel.
Flat panel detectors are considerably more expensive to purchase and repair than image intensifiers, so their uptake is primarily in specialties that require high-speed imaging, e.g., vascular imaging and cardiac catheterization.
Video Camera Tubes
The vidicon and plumbicon tubes are similar in operation, differing mainly in their target layer. A Plumbicon tube has a faster response time than a vidicon tube. The tube consists of a cathode with a control grid, a series of electromagnetic focusing and electrostatic deflecting coils, and an anode with face plate, signal plate and target.
Video Camera Charge-Coupled Devices (CCD)
A CCD is a semiconducting device capable of storing a charge from light photons striking a photosensitive surface. When light strikes the photoelectric cathode of the CCD, electrons are released in proportion to the intensity of the incident light. As with all semiconductors, the CCD has the ability to store the freed electrons in a series of P and N holes, thus storing the image in latent form. The video signal is emitted in a raster scanning pattern by moving the stored charges along the P and N holes to the edge of the CCD, where they are discharged as pulses into a conductor. The primary advantage of CCDs is their extremely fast discharge time, which eliminates image lag. This is extremely useful in high-speed imaging applications such as cardiac catheterization. Other advantages are that CCDs are more sensitive than video tubes; they operate at much lower voltages, which prolongs their life; they have acceptable resolution; and they are not as susceptible to damage from rough handling.
Risks
Radiation exposure to patients and laboratory staff has been recognized as a necessary hazard in fluoroscopic procedures. Fluoroscopic procedures pose a potential health risk to the patient and to staff working close by because of the long exposure times. Radiation doses to the patient depend greatly on the size of the patient as well as the length of the procedure, with typical skin dose rates quoted as 20-50 mGy/min. Exposure times vary depending on the procedure being performed. Staff doses are linked to patient doses because they result from secondary scattered radiation arising mainly from the patient. Staff may also be exposed to primary leakage radiation that is generated at the x-ray target and has penetrated the leaded x-ray tube housing. The radiographic projection is relevant in determining the scatter distribution around a patient. Oblique angles lead to higher exposure factors and therefore more scatter. At diagnostic energies, the Compton interaction leads predominantly to back-scatter in the direction of the x-ray tube. This means that there are higher levels of exposure on this side of the patient, which is an important result for the radiation protection education of staff.
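Since skin dose accumulates as dose rate times exposure time, the quoted 20-50 mGy/min range translates directly into cumulative dose. A trivial sketch:

```python
# Cumulative skin dose = dose rate x fluoroscopy time, using the
# 20-50 mGy/min typical range quoted above.
def skin_dose_mgy(dose_rate_mgy_per_min: float, minutes: float) -> float:
    return dose_rate_mgy_per_min * minutes

# A 10-minute screening run at each end of the typical range:
print(skin_dose_mgy(20.0, 10.0))  # 200.0 mGy
print(skin_dose_mgy(50.0, 10.0))  # 500.0 mGy
```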
Fluoroscopic units therefore operate with the minimum radiation output possible for the efficiency of the imaging system. The staff has a duty to ensure that anyone present in the fluoroscopy room during an examination wears a lead apron.
THE X-RAY CONTRAST MEDIUM
X-ray contrast media are compounds indicated for the enhancement of radiographic contrast in x-ray imaging such as computerized tomography (CT), digital subtraction angiography (DSA), examinations of the digestive and biliary systems, intravenous urography, phlebography of the extremities, venography, arteriography, visualization of body cavities (e.g., arthrography, hysterosalpingography, fistulography, dacryocystography), myelography, ventriculography, cisternography, and other diagnostic procedures.
CONTRAST MEDIUM PREPARATIONS
Three contrast medium preparations have been used in x-ray examinations: barium (Ba), iodine (I), and thorium (Th), but generally only the barium and iodine preparations are still used in x-ray examinations.
BARIUM PREPARATIONS
This is a suspension of powdered barium sulphate in water. Barium sulphate is insoluble and chemically quite inert. Soluble salts of barium are very poisonous, and only pharmaceutical-quality barium sulphate should be used. Barium depends for its radiopacity on its electron density (reflected indirectly by its atomic number), which is much greater than that of soft tissue and greater than that of bone. Barium sulfate is an insoluble white powder, which is mixed with water and some additional ingredients to make the contrast agent. As the barium sulfate does not dissolve, this type of contrast agent is an opaque white mixture. It is only used in the digestive tract; it is usually administered orally (for oesophagography and examination of the stomach and small intestine) or rectally as an enema (for the colon). After the examination, it leaves the body with the feces.
Should barium leak from the G.I. tract into tissues or into a body cavity, e.g., the mediastinum or peritoneum, it can cause a fibrogranulomatous reaction. Spill into the bronchial tree is a manageable problem unless it is gross, when death may ensue; weak barium preparations have been used for bronchography. After oral administration, it may compact in the large bowel, causing constipation, and may occasionally precipitate obstruction if there is a predisposing pathology.
The ideal barium sulphate/water mixture has yet to be developed, but the following properties are of utmost importance.
(a) Particle size. Ordinary barium sulphate particles are coarse, measuring several millimetres in size, but ultrafine milling of the crude barium sulphate results in 50 per cent of the particles having a size of between 5 μm and 15 μm. As the rate of sedimentation is proportional to particle size, the smaller the barium sulphate particle the more stable the suspension.
(b) Non-ionic medium. The charge on the barium sulphate particle influences the rate of aggregation of the particles. Charged particles attract each other and thus form larger particles which sediment more readily. They tend to do this even more in the gastric contents and consequently sediment more readily in the stomach.
(c) pH of the solution. The pH of the barium sulphate solution should be around 5.3, as more acid solutions tend to become more so in the gastric contents and consequently precipitate more readily in the stomach.
(d) Palatability. Undoubtedly ultrafine milling reduces much of the chalky taste inherent in any barium sulphate/water mixture, but many commercial preparations contain a flavouring agent which further disguises the unpleasant taste. The barium sulphate/water mixture is usually 1/4 weight/volume and has a viscosity of 15-20 cP, but thicker or thinner suspensions may be used. Many commercial preparations contain carboxymethyl cellulose (Raybar, Barosperse), which retains fluid and prevents precipitation of the barium suspension in the normal small bowel.
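The dependence of sedimentation rate on particle size in (a) above can be sketched with Stokes' law, a standard physical model not taken from this text; under it, settling velocity scales with the square of the particle radius. The densities and viscosity below are illustrative values for barium sulphate in water:

```python
# Stokes' law sketch for the sedimentation point in (a) above:
# terminal settling velocity v = 2 r^2 (rho_p - rho_f) g / (9 eta),
# so velocity scales with the square of the particle radius.
def settling_velocity(radius_m: float, rho_particle: float,
                      rho_fluid: float, viscosity_pa_s: float,
                      g: float = 9.81) -> float:
    return 2 * radius_m**2 * (rho_particle - rho_fluid) * g / (9 * viscosity_pa_s)

# Illustrative values: BaSO4 (~4500 kg/m^3) in water (1000 kg/m^3, ~1 mPa.s).
v_large = settling_velocity(15e-6, 4500, 1000, 1e-3)  # 15 um particle
v_small = settling_velocity(5e-6, 4500, 1000, 1e-3)   # 5 um particle
print(v_large / v_small)  # ~9: a 3x smaller particle settles ~9x slower
```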
The development of the double contrast technique has stressed the need for adequate mucosal coating and much of the present manufacturing efforts are devoted to achieving this. An excess of mucus and undue collection of fluid in the stomach greatly inhibit adequate coating of the gastric mucosa, as does hypermotility of the stomach.
To achieve double contrast examination of the stomach, air or carbon dioxide gas must be introduced and there is no doubt that introduction of air or gas via a nasogastric tube is the best means of obtaining a controlled degree of gastric distension. However, the passage of a gastric tube is an unpleasant procedure and is not acceptable to all patients. Consequently most radiologists use effervescent tablets (sodium bicarbonate 35 mg, tartaric acid 35 mg, calcium carbonate 50 mg) to react with the gastric contents to produce carbon dioxide.
The amount of gas produced by these methods is variable and overdistension of the stomach in the double contrast technique associated with poor coating can be, from a diagnostic viewpoint, as disastrous as inadequate distension. Some commercial preparations contain carbon dioxide gas under pressure in the barium mixture, but usually the quantity of gas is not adequate to produce good double contrast meals. An anti-foaming agent may need to be added to some barium preparations to avoid the formation of bubbles.
Water soluble iodine-containing contrast media are of value when there is a suspected perforation or leakage of an anastomosis after operation. The low radio-opacity of the iodine compared with the barium, and the high osmolarity which results in dilution within the small bowel, make it of little value for routine use in investigation of the small bowel. Water soluble contrast media are contraindicated if there is any danger of aspiration into the lungs.
IODINE PREPARATIONS
Since their introduction in the 1950s, organic radiographic iodinated contrast media (ICM) have been among the most commonly prescribed drugs in the history of modern medicine. Present-day radiologic imaging would not be possible without these agents. ICM generally have a good safety record. Adverse effects from the intravascular administration of ICM are generally mild and self-limited; reactions that occur from the extravascular use of ICM are rare. Nonetheless, severe or life-threatening reactions can occur with either route of administration. All currently used ICM are chemical modifications of a 2,4,6-tri-iodinated benzene ring. They are classified on the basis of their physical and chemical characteristics, including their chemical structure, osmolality, iodine content, and ionization in solution. In clinical practice, categorization based on osmolality is widely used.
Types of Iodinated Contrast Medium
The more iodine, the more "dense" the x-ray effect. There are many different molecules; some examples of organic iodine molecules are iohexol, iodixanol, and ioversol. Iodine-based contrast media are water soluble and as harmless as possible to the body. These contrast media are sold as clear, colorless water solutions, with the concentration usually expressed as mg I/mL. Modern iodinated contrast media can be used almost anywhere in the body. Most often they are used intravenously, but for various purposes they can also be used intraarterially, intrathecally (in the spine), and intraabdominally - in just about any body cavity or potential space. Contrast media, both ionic and non-ionic, consist of monomers (one tri-iodinated benzene ring) and dimers (two tri-iodinated benzene rings).
High-osmolality contrast media
Ionic monomers
High-osmolality contrast media consist of a tri-iodinated benzene ring with 2 organic side chains and a carboxyl group. The iodinated anion, diatrizoate or iothalamate, is conjugated with a cation, sodium or meglumine; the result is an ionic monomer. The ionization at the carboxyl-cation bond makes the agent water soluble. Thus, for every 3 iodine atoms, 2 particles are present in solution (ie, a ratio of 3:2).
The osmolality in solution ranges from 600 to 2100 mOsm/kg, versus 290 mOsm/kg for human plasma. The osmolality is related to some of the adverse events of these contrast media. Ionic monomers are subclassified by the percentage weight of the contrast agent molecule in solution.
Low-osmolality contrast media
There are 3 types of low-osmolality ICM: (1) ionic dimers, (2) nonionic monomers, and (3) nonionic dimers.
Ionic dimers
Ionic dimers are formed by joining 2 ionic monomers and eliminating 1 carboxyl group. These agents contain 6 iodine atoms for every 2 particles in solution (ie, a ratio of 6:2). The only commercially available ionic dimer is ioxaglate. Ioxaglate has a concentration of 59%, or 320 mg I/mL, and an osmolality of 600 mOsm/kg. Because of its high viscosity, ioxaglate is not manufactured at higher concentrations. Ioxaglate is used primarily for peripheral arteriography.
Nonionic monomers
In nonionic monomers, the tri-iodinated benzene ring is made water soluble by the addition of hydrophilic hydroxyl groups to organic side chains that are placed at the 1, 3, and 5 positions. Lacking a carboxyl group, nonionic monomers do not ionize in solution. Thus, for every 3 iodine atoms, only 1 particle is present in solution (ie, a ratio of 3:1). Therefore, at a given iodine concentration, nonionic monomers have approximately one half the osmolality of ionic monomers in solution. At normally used concentrations, 25-76%, nonionic monomers have 290-860 mOsm/kg.
Nonionic monomers are subclassified according to the number of milligrams of iodine in 1 mL of solution (eg, 240, 300, or 370 mg I/mL).
The larger side chains increase the viscosity of nonionic monomers compared with ionic monomers. The increased viscosity makes nonionic monomers harder to inject, but it does not appear to be related to the frequency of adverse events.
Common nonionic monomers are iohexol, iopamidol, ioversol, and iopromide.
The nonionic monomers are the contrast agents of choice. In addition to their nonionic nature and lower osmolalities, they are potentially less chemotoxic than the ionic monomers.
Nonionic dimers
Nonionic dimers consist of 2 joined nonionic monomers. These substances contain 6 iodine atoms for every 1 particle in solution (i.e., a ratio of 6:1). For a given iodine concentration, the nonionic dimers have the lowest osmolality of all the contrast agents. At approximately 60% concentration by weight, these agents are iso-osmolar with plasma. They are also highly viscous and thus have limited clinical usefulness.
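The iodine-to-particle ratios given above for the four ICM classes can be tabulated directly. A minimal sketch:

```python
# Iodine-to-particle ratios for the four ICM classes described above.
ICM_CLASSES = {
    "ionic monomer":    (3, 2),  # 3 iodine atoms per 2 particles in solution
    "ionic dimer":      (6, 2),
    "nonionic monomer": (3, 1),
    "nonionic dimer":   (6, 1),
}

def iodine_per_particle(icm_class: str) -> float:
    iodine_atoms, particles = ICM_CLASSES[icm_class]
    return iodine_atoms / particles

# A higher ratio means fewer osmotically active particles per iodine
# atom, hence lower osmolality at a given iodine concentration.
for name in ICM_CLASSES:
    print(name, iodine_per_particle(name))
```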
An older type of contrast medium, Thorotrast was based on thorium dioxide, but this was abandoned since it turned out to be carcinogenic.
Properties of Iodine Contrast Medium
Osmolality, viscosity, and iodine concentration are three inter-related physico-chemical properties, which are also influenced by the structure and size of the iodine-binding molecule. The expression of each property can vary greatly among contrast media.
During the last decade, innovations in the field of x-ray contrast media have focused on manipulation of these properties. However, because the properties are inter-related, a change in one may cause a change in another, at times unfavourably so. For example, efforts to decrease osmolality have led to iso-osmolar products; however, this has come at the cost of an unwanted higher level of viscosity.
Osmolality is a count of the number of particles in a fluid sample. The unit for counting is the mole, which is equal to 6.02 × 10²³ particles (Avogadro's number). Molarity is the number of particles of a particular substance in a volume of fluid (e.g., mmol/L), and molality is the number of particles dissolved in a mass of fluid (mmol/kg). Osmolality is a count of the total number of osmotically active particles in a solution and is equal to the sum of the molalities of all the solutes present in that solution. For most biological systems the molarity and molality of a solution are nearly equal because the density of water is 1 kg/L. There is a slight difference between molality and molarity in plasma because of the non-aqueous components present, such as proteins and lipids, which make up about 6% of the total volume. Thus serum is only 94% water, and the molality of a substance in serum is about 6% higher than its molarity.
Osmolality can be estimated with the commonly used clinical approximation: osmolality (mOsm/kg) ≈ 2 × [Na⁺] + [glucose] + [urea], with all concentrations in mmol/L.
Many of the side effects of the older agents are due to the hyperosmolar solution being injected; low-osmolality agents reduce these effects because they deliver more iodine atoms per particle in solution.
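The calculated osmolality mentioned above is usually obtained from the standard clinical approximation sketched below; the formula (2 × sodium plus glucose plus urea, all in mmol/L) is a common clinical rule of thumb rather than something stated explicitly in this text:

```python
# Calculated osmolality via the standard clinical approximation
# (2 x [Na+] + [glucose] + [urea], all in mmol/L). The formula is a
# common clinical rule of thumb, not quoted verbatim from this text.
def calculated_osmolality(na_mmol_l: float, glucose_mmol_l: float,
                          urea_mmol_l: float) -> float:
    return 2 * na_mmol_l + glucose_mmol_l + urea_mmol_l

# Typical plasma values give ~290 mOsm/kg, matching the plasma figure
# quoted earlier for comparison with contrast media.
print(calculated_osmolality(140, 5, 5))  # 290
```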
Side-effects of Iodine contrast medium (ICM)
The use of iodinated contrast media (ICM) may cause untoward side effects and manifestations of anaphylaxis. The symptoms include nausea, vomiting, widespread erythema, generalized heat sensation, headache, coryza or laryngeal edema, fever, sweating, asthenia, dizziness, pallor, dyspnoea, and moderate hypotension. More severe reactions involving the cardiovascular system, such as peripheral vasodilation with pronounced hypotension, tachycardia, dyspnoea, agitation, cyanosis, and loss of consciousness, may require emergency treatment. For these reasons the use of contrast media must be limited to cases for which the diagnostic procedure is definitely indicated.
Side effects associated with the intravascular use of iodinated contrast media are usually mild to moderate and temporary in nature, and are less frequent with non-ionic than with ionic preparations.
Adverse reactions to ICM are classified as idiosyncratic and nonidiosyncratic. The pathogenesis of such adverse reactions probably involves direct cellular effects; enzyme induction; and activation of the complement, fibrinolytic, kinin, and other systems.
Idiosyncratic reactions
Idiosyncratic reactions typically begin within 20 minutes of the ICM injection, independent of the dose that is administered. A severe idiosyncratic reaction can occur after an injection of less than 1 mL of a contrast agent.
Although reactions to ICM have the same manifestations as anaphylactic reactions, these are not true hypersensitivity reactions. Immunoglobulin E (IgE) antibodies are not involved. In addition, previous sensitization is not required, nor do these reactions consistently recur in a given patient. For these reasons, idiosyncratic reactions to ICM are called anaphylactoid reactions.
Anaphylactoid reactions
Anaphylactoid reactions occur rarely (Karnegis and Heinz, 1979; Lasser et al, 1987; Greenberger and Patterson, 1988), but can occur in response to injected as well as oral and rectal contrast and even retrograde pyelography. They are similar in presentation to anaphylactic reactions, but are not caused by an IgE-mediated immune response. Patients with a history of contrast reactions, however, are at increased risk of anaphylactoid reactions (Greenberger and Patterson, 1988; Lang et al, 1993). Pretreatment with corticosteroids has been shown to decrease the incidence of adverse reactions (Lasser et al, 1988; Greenberger et al, 1985; Wittbrodt and Spinler, 1994). The symptoms of anaphylactic reaction can be classified as mild, moderate, and severe.
Mild symptoms
Mild symptoms include the following: scattered urticaria, which is the most commonly reported adverse reaction; pruritus; rhinorrhea; nausea, brief retching, and/or vomiting; diaphoresis; coughing; and dizziness. Patients with mild symptoms should be observed for the progression or evolution of a more severe reaction, which requires treatment.
Moderate symptoms
Moderate symptoms include the following: persistent vomiting; diffuse urticaria; headache; facial edema; laryngeal edema; mild bronchospasm or dyspnea; palpitations, tachycardia, or bradycardia; hypertension; and abdominal cramps.
Severe symptoms
Severe symptoms include the following: life-threatening arrhythmias (ie, ventricular tachycardia), hypotension, overt bronchospasm, laryngeal edema, pulmonary edema, seizures, syncope, and death.
Anaphylactoid reactions range from urticaria and itching, to bronchospasm and facial and laryngeal edema. For simple cases of urticaria and itching, Benadryl (diphenhydramine) oral or IV is appropriate. For more severe reactions, including bronchospasm and facial or neck edema, albuterol inhaler, or subcutaneous or IV epinephrine, plus diphenhydramine may be needed. If respiration is compromised, an airway must be established prior to medical management.
Nonidiosyncratic reactions
Nonidiosyncratic reactions include the following: bradycardia, hypotension, and vasovagal reactions; neuropathy; cardiovascular reactions; extravasation; and delayed reactions. Other nonidiosyncratic reactions include sensations of warmth, a metallic taste in the mouth, and nausea and vomiting.
Bradycardia, hypotension, and vasovagal reactions
By inducing heightened systemic parasympathetic activity, ICM can precipitate bradycardia (eg, decreased discharge rate of the sinoatrial node, delayed atrioventricular nodal conduction) and peripheral vasodilatation. The end result is systemic hypotension with bradycardia. This may be accompanied by other autonomic manifestations, including nausea, vomiting, diaphoresis, sphincter dysfunction, and mental status changes. Untreated, these effects can lead to cardiovascular collapse and death. Some vasovagal reactions may be a result of coexisting circumstances such as emotion, apprehension, pain, and abdominal compression, rather than ICM administration.
Cardiovascular reactions
ICM can cause hypotension and bradycardia. Vasovagal reactions, a direct negative inotropic effect on the myocardium, and peripheral vasodilatation probably contribute to these effects. The latter 2 effects may represent the actions of cardioactive and vasoactive substances that are released after the anaphylactic reaction to the ICM. This effect is generally self-limiting, but it can also be an indicator of a more severe, evolving reaction.
ICM can lower the ventricular arrhythmia threshold and precipitate cardiac arrhythmias and cardiac arrest. Fluid shifts due to an infusion of hyperosmolar intravascular fluid can produce an intravascular hypervolemic state, systemic hypertension, and pulmonary edema. Also, ICM can precipitate angina.
The similarity of the cardiovascular and anaphylactic reactions to ICM can create confusion in identifying the true nature of the type and severity of an adverse reaction; this confusion can lead to the overtreatment or undertreatment of symptoms.
Other nonidiosyncratic reactions include syncope; seizures; and the aggravation of underlying diseases, including pheochromocytomas, sickle cell anemia, hyperthyroidism, and myasthenia gravis.
Extravasation
Extravasation of ICM into soft tissues during an injection can lead to tissue damage as a result of direct toxicity of the contrast agent or pressure effects, such as compartment syndrome.
Delayed reactions
Delayed reactions become apparent at least 30 minutes after but within 7 days of the ICM injection. These reactions are identified in as many as 14-30% of patients after the injection of ionic monomers and in 8-10% of patients after the injection of nonionic monomers.
Common delayed reactions include the development of flulike symptoms, such as the following: fatigue, weakness, upper respiratory tract congestion, fevers, chills, nausea, vomiting, diarrhea, abdominal pain, pain in the injected extremity, rash, dizziness, and headache.
Less frequently reported manifestations are pruritus, parotitis, polyarthropathy, constipation, and depression.
These signs and symptoms almost always resolve spontaneously; usually, little or no treatment is required. Some delayed reactions may be coincidental.
Nephropathy
Contrast-induced nephropathy is defined as either a greater than 25% increase in serum creatinine or an absolute increase in serum creatinine of 0.5 mg/dL. Three factors have been associated with an increased risk of contrast-induced nephropathy: preexisting renal insufficiency (such as creatinine clearance < 60 mL/min [1.00 mL/s]), preexisting diabetes, and reduced intravascular volume (McCullough, 1997; Scanlon et al, 1999).
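The definition above translates directly into a screening check. A minimal sketch (variable names are illustrative):

```python
# Screening check for contrast-induced nephropathy per the definition
# above: serum creatinine rises by more than 25% relative to baseline,
# or by 0.5 mg/dL in absolute terms.
def is_contrast_nephropathy(baseline_cr_mg_dl: float,
                            post_cr_mg_dl: float) -> bool:
    rise = post_cr_mg_dl - baseline_cr_mg_dl
    return rise >= 0.5 or rise > 0.25 * baseline_cr_mg_dl

print(is_contrast_nephropathy(1.0, 1.3))  # True  (30% relative rise)
print(is_contrast_nephropathy(2.0, 2.4))  # False (20% rise, 0.4 mg/dL)
```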
The osmolality of the contrast medium is believed to be of great importance in contrast-induced nephropathy. Ideally, the contrast agent should be iso-osmolar to blood. Modern iodinated contrast media are non-ionic; the older ionic types caused more adverse effects and are not used much anymore.
To minimize the risk for contrast-induced nephropathy, various actions can be taken if the patient has predisposing conditions. These have been reviewed in a meta-analysis.
1. The dose of contrast medium should be as low as possible, while still being able to perform the necessary examination.
2. Non-ionic contrast medium
3. Iso-osmolar, nonionic contrast medium. One randomized controlled trial found that an iso-osmolar, nonionic agent was superior to a low-osmolar, non-ionic contrast medium.
4. IV fluid hydration with saline. There is debate as to the most effective means of IV fluid hydration. One method is 1 mL/kg per hour for 6-12 hours before and after the contrast.
5. IV fluid hydration with saline plus sodium bicarbonate. As an alternative to IV hydration with plain saline, administration of sodium bicarbonate at 3 mL/kg per hour for 1 hour before, followed by 1 mL/kg per hour for 6 hours after contrast, was found superior to plain saline in one randomized controlled trial. This was subsequently corroborated by a multi-center randomized controlled trial, which also demonstrated that IV hydration with sodium bicarbonate was superior to 0.9% normal saline. The renoprotective effects of bicarbonate are thought to be due to urinary alkalinization, which creates an environment less amenable to the formation of harmful free radicals.
6. N-acetylcysteine (NAC). NAC, 600 mg orally twice a day, on the day before and the day of the procedure if creatinine clearance is estimated to be less than 60 mL/min [1.00 mL/s]. A randomized controlled trial found that higher doses of NAC (a 1200-mg IV bolus and 1200 mg orally twice daily for 2 days) benefited patients (relative risk reduction of 74%) receiving coronary angioplasty with higher volumes of contrast. Some recent studies suggest that N-acetylcysteine protects the kidney from the toxic effects of the contrast agent (Gleeson & Bulugahapitiya 2004). This effect is, in any case, not overwhelming. Some researchers (e.g., Hoffmann et al 2004) even claim that the effect is due to interference with the creatinine laboratory test itself. This is supported by a lack of correlation between creatinine levels and cystatin C levels.
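The bicarbonate regimen in item 5 above (3 mL/kg per hour for 1 hour before contrast, then 1 mL/kg per hour for 6 hours after) can be turned into a total-volume calculation; a minimal sketch:

```python
# Total infusion volume for the bicarbonate regimen in item 5 above:
# 3 mL/kg/h for 1 h before contrast, then 1 mL/kg/h for 6 h after.
def bicarbonate_protocol_ml(weight_kg: float) -> float:
    pre_contrast = 3.0 * weight_kg * 1.0   # mL infused before contrast
    post_contrast = 1.0 * weight_kg * 6.0  # mL infused after contrast
    return pre_contrast + post_contrast

print(bicarbonate_protocol_ml(70.0))  # 630.0 mL total for a 70 kg patient
```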
Other pharmacological agents, such as furosemide, mannitol, theophylline, aminophylline, dopamine, and atrial natriuretic peptide have been tried, but have either not had beneficial effects, or had detrimental effects (Solomon et al, 1994; Abizaid et al, 1999).
DIGITAL SUBTRACTION ANGIOGRAPHY (DSA)
Angiography is an x-ray examination in which a radio-opaque contrast medium is introduced into the vascular system to image the configuration of the vascular circulation. In a conventional angiography procedure, images are acquired by exposing an area of interest with time-controlled x-rays while injecting contrast medium into the blood vessels. The image obtained also includes all the structures overlying the blood vessels in this area, such as bone and soft tissue.
Digital subtraction angiography (DSA) is a digital vascular imaging technique used in interventional radiology to visualize blood vessels clearly, without the overlying bone and soft tissue, by subtracting a 'pre-contrast image' (the mask) from later images acquired once the contrast medium has been introduced into a structure. DSA thus combines digital image acquisition with the subtraction technique. Its most common use is in fluoroscopic angiography as a substitute for the static serial angiographic films produced by a rapid film changer.
DSA was developed during the 1970s by groups at the University of Wisconsin, the University of Arizona, and the Kinderklinik at the University of Kiel. This work led to the development of commercial systems that were introduced in 1980. Within the next few years many manufacturers of x-ray equipment introduced DSA products. After several years of rapid change, these systems evolved into those available today. The primary changes since the introduction of DSA in 1980 include improved image quality, larger pixel matrices (up to 1024 x 1024), and fully digital systems. Image quality has improved for two reasons: (1) the component parts (e.g., the image intensifier and television camera) have been improved, and (2) the components have been more effectively integrated into the system; the early systems were built from components selected "off the shelf" that may or may not have been properly matched.
Equipment and Apparatus
An image intensifier-television (fluoroscopy) system can be used to form images with little electrical interference, provide moderate resolution, and yield diagnostic-quality images when combined with a high-speed image processor in a DSA system. The television camera is focused onto the image-intensifier output phosphor and converts the light intensity into an electrical signal.
The image processor consists of a computer and image-processing hardware. The computer controls the various components (e.g., the memories, image-processing hardware, and x-ray generator), and the image-processing hardware gives the system the speed to perform many image-processing operations in real time.
DSA depends on the mating of high-resolution image-intensifier and television technology with computerized information manipulation and storage.
X-ray Image Intensifiers
An image intensifier is a device that intensifies low light-level images to light levels that can be seen with the human eye or detected by a video camera. It is a vacuum tube with an input window, on whose inside surface a light-sensitive layer called the photocathode has been deposited. Photons are absorbed in the photocathode and give rise to the emission of electrons into the vacuum. These electrons are accelerated by an electric field to increase their energy and to focus them. After multiplication by an MCP (microchannel plate), the electrons are finally accelerated towards the anode screen. The anode screen contains a layer of phosphorescent material covered by a thin aluminium film. On striking the anode, the energy of the electrons is converted into photons again. Because of the multiplication and the increased energy of the electrons, the output brightness is higher than the original input light intensity.
Modern image intensifiers no longer use a separate fluorescent screen. Instead, a caesium iodide phosphor is deposited directly on the photocathode of the intensifier tube. On a typical general-purpose system, the output image is approximately 10^5 times brighter than the input image. This brightness gain comprises a flux gain (amplification of photon number) and a minification gain (concentration of photons from a large input screen onto a small output screen), each of approximately 100. This level of gain is sufficient that quantum noise, due to the limited number of x-ray photons, is a significant factor limiting image quality.
Image intensifiers are available with input diameters of up to 45 cm and a resolution of approximately 2-3 line pairs per mm.
Flat-panel detectors
The introduction of flat-panel detectors allows for the replacement of the image intensifier in fluoroscope designs. Flat-panel detectors offer increased sensitivity to X-rays and therefore have the potential to reduce patient radiation dose. Temporal resolution is also improved over image intensifiers, reducing motion blurring. Contrast ratio is improved as well: flat-panel detectors are linear over a very wide latitude, whereas image intensifiers have a maximum contrast ratio of about 35:1. Spatial resolution is approximately equal, although an image intensifier operating in 'magnification' mode may be slightly better than a flat panel.
Flat panel detectors are considerably more expensive to purchase and repair than image intensifiers, so their uptake is primarily in specialties that require high-speed imaging, e.g., vascular imaging and cardiac catheterization.
Fluoroscopy
Fluoroscopy is a dynamic radiographic examination, in contrast to diagnostic radiography, which is static in character. It is an imaging technique used to obtain real-time images of the internal structures of a patient through the use of a fluoroscope. In its simplest form, a fluoroscope consists of an x-ray source and a fluorescent screen between which the patient is placed. Modern fluoroscopes, however, couple the screen to an x-ray image intensifier and CCD video camera, allowing the images to be displayed and recorded on a monitor. The use of x-rays, a form of ionizing radiation, requires that the potential risks of a procedure be carefully balanced against its benefits to the patient. While physicians always try to use low dose rates during fluoroscopy procedures, the length of a typical procedure often results in a relatively high absorbed dose to the patient. Recent advances include the digitization of the captured images and flat-panel detector systems, which reduce the radiation dose to the patient still further.
The Basic Principles of DSA
Under fluoroscopic control, the patient is injected with contrast medium, either directly into a blood vessel or through a catheter, and the blood vessels in the anatomical region of interest are then highlighted on a sequence of radiographic images.
To visualize blood vessels clearly in a bony or dense soft-tissue environment, a mask image is acquired first. The mask image is simply an image of the same area before the contrast is administered. The radiological equipment used to capture it is usually an image intensifier, which then keeps producing images of the same area at a set rate (1-6 frames per second), subtracting each subsequent image from the original mask. The radiologist controls how much contrast medium is injected and for how long; smaller structures require less contrast to fill the vessel than larger ones. The resulting images appear with a very pale grey background, which gives high contrast to the blood vessels, which appear very dark grey. The images are all produced in real time by the computer as the contrast is injected into the blood vessels.
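The mask-subtraction step described above can be sketched in a few lines of Python; the frame size and pixel values below are invented purely for illustration:

```python
# Illustrative sketch of DSA temporal subtraction (hypothetical pixel values).
# The mask is acquired before contrast; the live frame after injection.
# Subtracting the mask cancels static anatomy (bone, soft tissue), leaving
# only the opacified vessels.

def subtract(mask, live):
    """Pixel-wise subtraction of the pre-contrast mask from a live frame."""
    return [[live[r][c] - mask[r][c] for c in range(len(mask[0]))]
            for r in range(len(mask))]

# Background anatomy is identical in both frames ...
mask = [[90, 90, 90, 90],
        [90, 90, 90, 90]]
# ... while contrast in a vessel darkens two pixels of the live frame.
live = [[90, 60, 60, 90],
        [90, 90, 90, 90]]

dsa = subtract(mask, live)
# Static anatomy cancels to zero; only the vessel signal remains.
```

A real system would also apply logarithmic processing and registration before subtracting, but the cancellation principle is the same.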
Radiation Exposure
Radiation exposure from X-ray angiography procedures is relatively high compared with conventional radiographic procedures. Angiography procedures can generate highly localized doses to the skin of patients, which may be above the threshold for deterministic injuries, as well as carrying an increased risk of cancer induction. Staff doses are linked to patient doses because they result from secondary scattered radiation arising mainly from the patient. Staff may also be exposed to primary leakage radiation that is generated at the X-ray target and has penetrated the leaded X-ray tube housing. Without due care and understanding, multiple procedures could lead to serious injury. This highlights the need to optimize the imaging equipment used during angiography and to use dose-saving techniques properly. The training of staff working in the vicinity of X-ray equipment is also of paramount importance. Radiation exposure to patients and laboratory staff has been recognized as a necessary hazard in angiography.
Procedures that utilize ionizing radiation should be performed in accordance with the As Low As Reasonably Achievable (ALARA) philosophy. Thus, personnel ordering and performing angiography should be very familiar with the radiation dose delivered by angiography procedures and the ways in which dose can be minimized.
The Ionising Radiations Regulations 1999 require that measures are taken to minimize the radiation dose received by those working in a radiation environment. This is normally achieved by ensuring that those persons working within "Controlled Areas" are adequately trained in matters relating to radiation protection. For some of these groups (e.g. radiologist, cardiologists and radiographers), training in such matters forms a significant part of their basic training.
Specific points to impart are:
1. Digital acquisitions lead to much higher doses than fluoroscopy.
2. When imaging oblique angles, the scatter on the X-ray tube side is greater than that on the intensifier side.
3. Lead protection must be carefully placed to ensure continuity of protection.
4. Distance from the patient is an effective method of dose reduction.
COMPUTER FUNDAMENTALS AND APPLICATIONS IN MEDICAL IMAGING
By Sumarsono
The progressive and evolutionary growth of medicine would not be possible without the aid of computers. As a result of the application of the computer to the storage, analysis, and manipulation of data, pathologic conditions can be diagnosed more accurately and earlier in the disease process, resulting in an increased patient cure rate. The increasing use of computers in medical science clearly demonstrates the need for qualified personnel who can understand and operate computerized equipment.
History of computer
The history of computers can be traced back to man's efforts to count large numbers. This need generated various systems of numeration, such as the Babylonian, Greek, Roman, and Indian systems. Of these, the Indian system has been accepted universally; it is the basis of the modern decimal system of numeration (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). Later you will see how the computer performs calculations based on the decimal system, yet, perhaps surprisingly, the computer does not understand the decimal system itself and uses the binary system of numeration for processing.
1. Calculating Machines
It took many generations for early man to build mechanical devices for counting large numbers. The first calculating device, called the ABACUS, was developed by the Egyptian and Chinese peoples. The word ABACUS means calculating board. It consisted of sticks in horizontal positions on which sets of pebbles were strung. A modern form of the ABACUS is given in Fig. 1.2. It has a number of horizontal bars, each having ten beads; the bars represent units, tens, hundreds, and so on.
2. Napier’s bones
Scottish mathematician John Napier built a mechanical device for the purpose of multiplication in 1617 AD. The device became known as Napier's bones.
3. Slide Rule
English mathematician Edmund Gunter developed the slide rule. This device could perform operations such as multiplication and division. It was widely used in Europe in the 17th century.
4. Pascal's Adding and Subtractory Machine
You might have heard the name of Blaise Pascal. He developed a machine at the age of 19 that could add and subtract. The machine consisted of wheels, gears and cylinders.
5. Leibniz’s Multiplication and Dividing Machine
The German philosopher and mathematician Gottfried Leibniz built around 1673 a mechanical device that could both multiply and divide.
6. Babbage’s Analytical Engine
It was in the year 1823 that the famous Englishman Charles Babbage built a mechanical machine to perform complex mathematical calculations, called the difference engine. Later he developed a general-purpose calculating machine called the analytical engine. Charles Babbage is called the father of the computer.
7. Mechanical and Electrical Calculator
In the beginning of the 19th century, the mechanical calculator was developed to perform all sorts of mathematical calculations, and it was widely used up to the 1960s. Later, the rotating part of the mechanical calculator was driven by an electric motor, and it became known as the electrical calculator.
8. Modern Electronic Calculator
The electronic calculators used in the 1960s ran on electron tubes, which made them quite bulky. The tubes were later replaced with transistors, and as a result the size of calculators became very small. The modern electronic calculator can compute all kinds of mathematical computations and mathematical functions. It can also store some data permanently, and some calculators have built-in programs to perform complicated calculations.
Computers that used vacuum tubes as their electronic elements were in use throughout the 1950s. Vacuum tube electronics were largely replaced in the 1960s by transistor-based electronics, which are smaller, faster, cheaper to produce, require less power, and are more reliable. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased the speed and reliability of computers. By the 1980s, computers had become sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as washing machines. The 1980s also witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.
Functional Components of a Computer
The term hardware describes the functional equipment components of a computer and is everything concerning the computer that is visible. Software designates the parts of the computer system that are invisible, such as the machine language and the programs. A computer program may run from just a few instructions to many millions of instructions. A typical modern computer can execute billions of instructions per second (gigahertz or GHz) and rarely makes a mistake over many years of operation. Large computer programs comprising several million instructions may take teams of programmers years to write, so it is highly unlikely that the entire program has been written without error.
The computer hardware consists of four functionally independent components: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.
Input is the process of entering data and programs into the computer system. Like any other machine, the computer takes raw data as input, performs some processing, and gives out processed data. The input unit therefore takes data from the user into the computer in an organized manner for processing.
The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) the instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer. Control systems in advanced computers may change the order of some instructions so as to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from. The control system's function is as follows; note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).
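The fetch-decode-execute cycle listed above can be illustrated with a toy interpreter. The instruction set, opcode names, and memory layout here are invented for illustration and do not belong to any real CPU:

```python
# A minimal toy CPU illustrating the fetch-decode-execute cycle.

def run(program, memory):
    pc = 0          # program counter: address of the next instruction
    acc = 0         # a single accumulator register feeding the "ALU"
    while pc < len(program):
        op, arg = program[pc]        # 1. fetch the instruction at the PC
        pc += 1                      # 3. increment the PC
        if op == "LOAD":             # 2. decode; 4-5. read data for the ALU
            acc = memory[arg]
        elif op == "ADD":            # 6. ask the ALU to perform the operation
            acc += memory[arg]
        elif op == "STORE":          # 7. write the result back to memory
            memory[arg] = acc
        elif op == "HALT":
            break
    return memory

mem = {0: 2, 1: 3, 2: 0}
run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)], mem)
# mem[2] now holds 2 + 3 = 5
```

Real control units do the same bookkeeping in hardware, with the decode step producing electrical control signals rather than Python branches.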
The process of saving data and instructions permanently is known as storage. Data has to be fed into the system before the actual processing starts, because the processing speed of the Central Processing Unit (CPU) is so fast that data must be supplied to it at a comparable speed. The data is therefore first stored in the storage unit for faster access and processing. The storage unit, or primary storage, of the computer system is designed for this purpose: it provides space for storing data and instructions.
Computer main memory comes in two principal varieties: random access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off while being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speed is not required. In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally, computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or might include multiplying and dividing, trigonometric functions (sine, cosine, etc.), and square roots. Some can operate only on whole numbers (integers), whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than, or less than the other.
Output: This is the process of producing results from the data, yielding useful information. The output produced by the computer after processing must be kept somewhere inside the computer before being presented in human-readable form; it is also stored for further processing.
Binary System
In the computer's memory, both programs and data are stored in binary form. The binary system has only two values, 0 and 1. As human beings we all understand the decimal system, but the computer can only understand the binary system. This is because the large number of integrated circuits inside the computer can be considered as switches, each of which can be either ON or OFF. If a switch is ON it is considered 1; if it is OFF, 0. A number of switches in different states thus encode a message like 110101...10. So the computer takes input in the form of 0s and 1s and gives output in the same form. Every number in the binary system can be converted to decimal and vice versa; for example, binary 1010 is decimal 10. The computer therefore takes information or data in decimal form from the user, converts it into binary form, processes it to produce output in binary form, and converts the output back to decimal form.
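The decimal-to-binary conversion described above (binary 1010 = decimal 10) can be checked directly with Python's built-ins:

```python
# Round-trip between decimal and binary representations.
n = 10
binary = bin(n)[2:]      # decimal 10 -> the bit string '1010'
back = int(binary, 2)    # the bit string '1010' -> decimal 10
```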
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers; either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
Applications in Medical Imaging
Analog to Digital Converters
Devices that produce images of real objects, such as patients, produce an analog rather than a digital signal. The electrical signal emitted from the output phosphor of an image intensifier on a fluoroscopic unit, the scintillation crystal of a nuclear medicine detector or a computed tomography (CT) unit, or the piezoelectric crystal of an ultrasound machine is in analog form, with a variance in voltage. For these signals to be read as data by the computer, they must be digitized, that is, converted into the binary number system. The peripheral device that performs this task is the analog-to-digital converter (ADC), which transforms the sine wave into discrete increments with binary numbers assigned to each increment. The assignment of the binary numbers depends on the output voltage, which in turn represents the degree of attenuation of the various tissue densities within the patient. The basic component of an ADC is the comparator, which outputs a "1" when the voltage equals or exceeds a precise analog voltage and a "0" when it does not. The most significant parameters of an ADC are (1) digitization depth (the number of bits in the resultant binary number); (2) dynamic range (the range of voltages or input signals that result in a digital output); and (3) digitization rate (the rate at which digitization takes place). Achieving the optimal digitization depth is necessary for resolution quality and flexibility in image manipulation. Digitization depth and dynamic range are analogous to latitude in a radiograph.
To produce a video image, the field of the image is divided into many small cubes, or a matrix, with each cube assigned a binary number proportional to the degree of attenuation of the x-ray beam or the intensity of the incoming signal. The individual three-dimensional cubes, with length, width, and depth, are called voxels (volume elements), with the degree of attenuation or intensity of the incoming voltage determining their composition and thickness.
Because the technology for displaying three-dimensional objects has not been fully developed, a two-dimensional square, or pixel (picture element), represents the voxel on the television display monitor or cathode ray tube. The matrix is an array of pixels arranged in two dimensions, length and width, or rows and columns. The more pixels an image contains, the larger the matrix becomes and the better the resolution quality of the image. For instance, a matrix of 256 x 256 pixels contains a total of 65,536 pixels or pieces of data, whereas a matrix of 512 x 512 pixels contains 262,144. One should not confuse field size with matrix size. A larger matrix also allows more manipulation of the data or the image displayed on the monitor and is very beneficial in imaging modalities such as digital subtraction fluoroscopy, CT, nuclear medicine, and ultrasound.
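The pixel counts quoted above follow directly from the matrix dimensions:

```python
# Number of pixels (data elements) in a square image matrix.
def pixel_count(n):
    return n * n

pixel_count(256)   # 65536 pieces of data
pixel_count(512)   # 262144 pieces of data
```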
DIGITAL IMAGE PROCESSING
Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subfield of digital signal processing, it has many advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing.
Within the computer system, digital images are represented as groups of numbers. These numbers can therefore be changed by applying mathematical operations, and as a result the image is altered. This important concept provides extraordinary control in contrast enhancement, image enhancement, subtraction techniques, and magnification without losing the original image data.
Contrast Enhancement
Window width encompasses the range of densities within an image. A narrow window is comparable to the use of a high-contrast radiographic film; as a result, image contrast is increased. Increasing the width of the window allows more of the grey scale, or more latitude in the densities of the image, to be visualized. A narrow window is valuable when subtle differences in subject density need to be better visualized. However, the use of a narrow window increases image noise, and densities outside the window are not visualized.
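A minimal sketch of this window mapping, assuming an 8-bit grey scale and invented density values: densities inside the window are stretched over the full grey scale, while densities outside it are clipped to black or white, which is exactly why a narrow window boosts contrast but loses everything beyond its range.

```python
# Window-width/window-level mapping onto an 8-bit display grey scale.

def window(value, level, width, out_max=255):
    lo, hi = level - width / 2, level + width / 2
    if value <= lo:
        return 0              # below the window: uniformly black
    if value >= hi:
        return out_max        # above the window: uniformly white
    # inside the window: stretch linearly over the full grey scale
    return round((value - lo) / width * out_max)

# With a narrow window (width 20 around level 100), a small density
# difference maps to a large displayed grey-level difference.
window(105, level=100, width=20)   # bright grey
window(95, level=100, width=20)    # dark grey
window(150, level=100, width=20)   # clipped to white, detail lost
```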
Image Enhancement or Reconstruction
Image enhancement or reconstruction is accomplished by digital processing with filtering, which can be defined as accenting or attenuating selected frequencies in the image. The filtration methods used in medical imaging are classified as (1) convolution; (2) low-pass filtering, or smoothing; (3) band-pass filtering; and (4) high-pass filtering, or edge enhancement. The background intensities within a medical image consist mainly of low spatial frequencies, whereas an edge, typifying a sudden change in intensities, is composed mainly of high spatial frequencies; an example is the bone-to-air interface of the skull in computed tomography. Spatial noise originating from within the computer system usually consists of high spatial frequencies. Filtration reduces the amount of high spatial frequencies inherent in the object. The percentage of transmission versus spatial frequency, when plotted on a graph, is called the modulation transfer function (MTF). One aim of continuing research in medical imaging is to develop systems with a higher modulation transfer function.
Convolution is accomplished automatically with computer systems that are equipped with fast Fourier transforms. The convolution process is implemented by placing a filter mask array, or matrix, over the image array in memory. A filter mask is a square array of numbers, usually of 3 x 3 elements. The size of the filter mask is determined by the manufacturer of the equipment; larger masks are not often used because they take longer to process. The convolution filtering process can be conceptualized as placing the filter mask over an area of the image matrix, multiplying each element by the value directly beneath it, summing these products, and placing that sum in the output image at the exact location it occupied in the original image.
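The mask-multiply-sum procedure just described can be sketched as follows. The 3 x 3 averaging (smoothing) mask is a common low-pass kernel, and the image values are invented for illustration; edge pixels are skipped for brevity:

```python
# A 3x3 convolution pass over a small image, following the steps in the text:
# place the mask, multiply element-wise, sum, and store at the same location.

def convolve3x3(image, mask):
    """Slide a 3x3 mask over the image interior (edges left unfiltered)."""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            total = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    total += image[r + dr][c + dc] * mask[dr + 1][dc + 1]
            out[r][c] = total
    return out

smooth = [[1 / 9] * 3 for _ in range(3)]   # averaging (smoothing) mask
img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
convolve3x3(img, smooth)   # centre pixel becomes the neighbourhood average (about 1)
```

Smoothing suppresses the isolated high-frequency spike at the centre, which is the low-pass behaviour the classification above describes.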
Subtraction Technique
The advantages of digital subtraction include the ability to visualize small anatomic structures and to perform the examination via venous injection of contrast media. The most common digital subtraction techniques are temporal subtraction and dual-energy subtraction; hybrid subtraction is a combination of the two.
Magnification
Magnification, sometimes called zooming, is a process of selecting an area of interest and copying each pixel within the area an integer number of times. Large magnifications may give the image an appearance of being constructed of blocks. To provide a more diagnostic image, a smoothing or low-pass filter operation can be done to smooth out the distinct intensities between the blocks.
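Pixel-replication zooming, as described above, can be sketched in a few lines; the 2 x 2 image and zoom factor are invented for illustration. Copying each pixel an integer number of times in both directions is what produces the blocky appearance at large magnifications:

```python
# Zoom by pixel replication: copy each pixel `factor` times along the row,
# then copy each widened row `factor` times down the image.

def zoom(image, factor):
    out = []
    for row in image:
        wide = [p for p in row for _ in range(factor)]   # replicate pixels
        out.extend([wide] * factor)                      # replicate rows
    return out

zoom([[1, 2],
      [3, 4]], 2)
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

A follow-up smoothing (low-pass) pass, as the text notes, would blur the sharp block boundaries this replication creates.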
Three-Dimensional Image
When three-dimensional imaging was first introduced, the images were less than optimal because the resolution was too low to adequately visualize anatomic structures deeper within the body. The images often adequately displayed only the denser structures closer to the body surface, or surface boundaries, which appeared blocky and jagged; the soft tissue and less dense structures were not visualized. By using fast Fourier transforms (3DFT), new algorithms for mathematical calculation, and computers with faster processing times, three-dimensional images have become smooth, sharply focused, and realistically shaded to demonstrate soft tissue. The ability to demonstrate soft tissue in three-dimensional imaging is referred to as a volumetric rendering technique.
Volumetric rendering is a computer process whereby a "stack" of sequential images is processed as a volume, with the grey-scale intensity information in each pixel interpolated along the z axis (perpendicular to the x and y axes). Interpolation is necessary because the field of view of the scan (the x and y axes) is not the same as the z axis, owing to interscan spacing. Following this computer process, new data are generated by interpolation, so that each new voxel has the same dimensions in all three directions. The volumetric rendering technique enables definition of the object's thickness, a crucial factor in three-dimensional imaging and in visualizing subtle densities.
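A minimal sketch of the z-axis interpolation step, assuming simple linear interpolation between two slices of invented intensity values (real systems may use higher-order schemes):

```python
# Generate an intermediate slice between two acquired slices so that the
# reconstructed voxels have equal dimensions along x, y, and z.

def interpolate_slices(slice_a, slice_b, t):
    """Linear interpolation between two slices at fraction t (0..1) along z."""
    return [[(1 - t) * a + t * b for a, b in zip(ra, rb)]
            for ra, rb in zip(slice_a, slice_b)]

a = [[0, 0], [0, 0]]       # acquired slice at one z position
b = [[10, 10], [10, 10]]   # acquired slice one interscan spacing away

interpolate_slices(a, b, 0.5)   # midway slice: every value becomes 5.0
```

Stacking the acquired and interpolated slices yields the isotropic voxel volume that the rendering step then shades and projects.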