---THE SOURCE OF MEDICAL IMAGING PROCEDURES AND TECHNOLOGY created by Sumarsono.Rad.Tech,S.Si----

FLUOROSCOPY

Fluoroscopy is a dynamic radiographic examination, in contrast to diagnostic radiography, which is static in character. Fluoroscopy is an imaging technique used to obtain real-time images of the internal structures of a patient through the use of a fluoroscope. In its simplest form, a fluoroscope consists of an x-ray source and a fluorescent screen between which the patient is placed. However, modern fluoroscopes (digital fluoroscopy) couple the screen to an x-ray image intensifier or flat-panel detector and a CCD video camera, allowing the images to be displayed and recorded on a monitor. The use of x-rays, a form of ionizing radiation, requires that the potential risks of a procedure be carefully balanced against its benefits to the patient. While physicians always try to use low dose rates during fluoroscopy procedures, the length of a typical procedure often results in a relatively high absorbed dose to the patient. Recent advances include the digitization of the images captured and flat-panel detector systems, which reduce the radiation dose to the patient still further.

Types of Equipment

The Fluoroscopic x-ray tube and image receptor are mounted on a C-arm to maintain their alignment at all times. The C-arm permits the image receptor to be raised and lowered to vary the beam geometry for maximum resolution while the X-ray tube remains in position. It also permits scanning the length and width of the x-ray table. There are two types of C-arm arrangements, both described by the location of the x-ray tube. Under-table units have the x-ray tube under the table while over-table units suspend the tube over the patient. The arm that supports the equipment suspended over the table is called the carriage.

Fluoroscopy Equipment

Fluoroscopy X-Ray Tube

Fluoroscopy x-ray tubes are very similar to diagnostic tubes except that they are designed to operate for longer periods of time at much lower mA. The fluoroscopic tube is operated by a foot switch, which leaves the fluoroscopist with both hands free to operate the carriage and to position and palpate the patient.

X-ray Image Intensifiers

An image intensifier is a device that intensifies low light-level images to light levels that can be seen with the human eye or detected by a video camera. An image intensifier is a vacuum tube with an input window, on the inside surface of which a light-sensitive layer called the photocathode has been deposited. Photons are absorbed in the photocathode and give rise to the emission of electrons into the vacuum. These electrons are accelerated by an electric field to increase their energy and to focus them. After multiplication by an MCP (microchannel plate), these electrons are finally accelerated towards the anode screen. The anode screen contains a layer of phosphorescent material covered by a thin aluminium film. When the electrons strike the anode, their energy is converted into photons again. Because of the multiplication and the increased energy of the electrons, the output brightness is higher than the original input light intensity.
Modern image intensifiers no longer use a separate fluorescent screen. Instead, a caesium iodide phosphor is deposited directly on the photocathode of the intensifier tube. On a typical general-purpose system, the output image is approximately 10^5 times brighter than the input image. This brightness gain comprises a flux gain (amplification of photon number) and a minification gain (concentration of photons from a large input screen onto a small output screen), each of approximately 100. This level of gain is sufficient that quantum noise, due to the limited number of x-ray photons, is a significant factor limiting image quality.
Image intensifiers are available with input diameters of up to 45 cm and a resolution of approximately 2-3 line pairs per mm.

Flat-panel detectors

The introduction of flat-panel detectors allows for the replacement of the image intensifier in fluoroscope design. Flat panel detectors offer increased sensitivity to X-rays, and therefore have the potential to reduce patient radiation dose. Temporal resolution is also improved over image intensifiers, reducing motion blurring. Contrast ratio is also improved over image intensifiers: flat-panel detectors are linear over a very wide latitude, whereas image intensifiers have a maximum contrast ratio of about 35:1. Spatial resolution is approximately equal, although an image intensifier operating in 'magnification' mode may be slightly better than a flat panel.
Flat panel detectors are considerably more expensive to purchase and repair than image intensifiers, so their uptake is primarily in specialties that require high-speed imaging, e.g., vascular imaging and cardiac catheterization.

Video Camera Tubes

The vidicon and plumbicon tubes are similar in operation, differing mainly in their target layer. A Plumbicon tube has a faster response time than a vidicon tube. The tube consists of a cathode with a control grid, a series of electromagnetic focusing and electrostatic deflecting coils, and an anode with face plate, signal plate and target.

Video Camera Charge-Coupled Devices (CCD)

A CCD is a semiconducting device capable of storing a charge from light photons striking a photosensitive surface. When light strikes the photoelectric cathode of the CCD, electrons are released in proportion to the intensity of the incident light. As with all semiconductors, the CCD has the ability to store the freed electrons in a series of P and N holes, thus storing the image in latent form. The video signal is emitted in a raster scanning pattern by moving the stored charges along the P and N holes to the edge of the CCD, where they are discharged as pulses into a conductor. The primary advantage of CCDs is their extremely fast discharge time, which eliminates image lag. This is extremely useful in high-speed imaging applications such as cardiac catheterization. Other advantages are that CCDs are more sensitive than video tubes; they operate at much lower voltages, which prolongs their life; they have acceptable resolution; and they are not as susceptible to damage from rough handling.

Risks

Radiation exposure to patients and laboratory staff has been recognized as a necessary hazard in fluoroscopic procedures. Fluoroscopic procedures pose a potential health risk to the patient and to staff working close by because of the long exposure times. Radiation doses to the patient depend greatly on the size of the patient as well as the length of the procedure, with typical skin dose rates quoted as 20-50 mGy/min. Exposure times vary depending on the procedure being performed. Staff doses are linked to patient doses because they result from secondary scattered radiation arising mainly from the patient. Staff may also be exposed to primary leakage radiation that is generated at the X-ray target and has penetrated the leaded X-ray tube housing. The radiographic projection is relevant in determining the scatter distribution around a patient. Oblique angles require higher exposure factors and therefore produce more scatter. At diagnostic energies, the Compton interaction leads predominantly to back-scatter in the direction of the X-ray tube. This means that there are higher levels of exposure on this side of the patient, which is an important point for the radiation protection education of staff.
Fluoroscopic units should therefore operate with the minimum radiation output consistent with the efficiency of the imaging system, and staff have a duty to ensure that anyone present in the fluoroscopy room during an examination wears a lead apron.



THE X-RAY CONTRAST MEDIUM

X-ray contrast media are compounds used to enhance radiographic contrast in x-ray imaging, for example in computed tomography (CT), digital subtraction angiography (DSA), examinations of the digestive system and biliary system, intravenous urography, phlebography of the extremities, venography, arteriography, visualization of body cavities (e.g. arthrography, hysterosalpingography, fistulography, dacryocystography), myelography, ventriculography, cisternography and other diagnostic procedures.

CONTRAST MEDIUM PREPARATIONS
Three contrast medium preparations have been used in x-ray examinations: barium (Ba), iodine (I) and thorium (Th), but in general only the barium and iodine preparations are still used.
BARIUM PREPARATIONS
This is a suspension of powdered barium sulphate in water. Barium sulphate is insoluble and chemically quite inert. Soluble salts of barium are very poisonous, so only pharmaceutical-quality barium sulphate should be used. Barium depends for its radiopacity on its electron density (reflected indirectly by its atomic number), which is much greater than that of soft tissue and greater than that of bone. Barium sulphate is an insoluble white powder that is mixed with water and some additional ingredients to make the contrast agent. Because the barium sulphate does not dissolve, this type of contrast agent is an opaque white mixture. It is only used in the digestive tract; it is usually administered orally (for oesophagography and examination of the stomach and small intestine) or rectally as an enema (for the colon). After the examination, it leaves the body with the faeces.
Should barium leak from the G.I. tract into tissues or into a body cavity, e.g. the mediastinum or peritoneum, it can cause a fibrogranulomatous reaction. Spill into the bronchial tree is a manageable problem unless it is gross, when death may ensue; weak barium preparations have been used for bronchography. After oral administration, barium may compact in the large bowel causing constipation, and occasionally it may precipitate obstruction if there is a predisposing pathology.
The ideal barium sulphate/water mixture has yet to be developed, but the following properties are of utmost importance.

(a) Particle size. Ordinary barium sulphate particles are coarse, measuring several micrometres in size, but ultrafine milling of the crude barium sulphate results in 50 per cent of the particles having a size of between 0.5 µm and 1.5 µm. As the rate of sedimentation is proportional to particle size, the smaller the barium sulphate particle, the more stable the suspension.

(b) Non-ionic medium. The charge on the barium sulphate particle influences the rate of aggregation of the particles. Charged particles attract each other and thus form larger particles which sediment more readily. They tend to do this even more in the gastric contents and consequently sediment more readily in the stomach.

(c) pH of the solution. The pH of the barium sulphate solution should be around 5.3, as more acid solutions tend to become even more acidic in the gastric contents and consequently precipitate more readily in the stomach.

(d) Palatability. Undoubtedly ultrafine milling reduces much of the chalky taste inherent in any barium sulphate/water mixture, but many commercial preparations contain a flavouring agent which further disguises the unpleasant taste. The barium sulphate/water mixture is usually 1/4 weight/volume, and has a viscosity of 15-20 cp, but thicker or thinner suspensions may be used. Many commercial preparations contain carboxymethyl cellulose (Raybar, Barosperse), which retains fluid and prevents precipitation of the barium suspension in the normal small bowel.

The development of the double contrast technique has stressed the need for adequate mucosal coating and much of the present manufacturing efforts are devoted to achieving this. An excess of mucus and undue collection of fluid in the stomach greatly inhibit adequate coating of the gastric mucosa, as does hypermotility of the stomach.

To achieve double contrast examination of the stomach, air or carbon dioxide gas must be introduced and there is no doubt that introduction of air or gas via a nasogastric tube is the best means of obtaining a controlled degree of gastric distension. However, the passage of a gastric tube is an unpleasant procedure and is not acceptable to all patients. Consequently most radiologists use effervescent tablets (sodium bicarbonate 35 mg, tartaric acid 35 mg, calcium carbonate 50 mg) to react with the gastric contents to produce carbon dioxide.
The amount of gas produced by these methods is variable and overdistension of the stomach in the double contrast technique associated with poor coating can be, from a diagnostic viewpoint, as disastrous as inadequate distension. Some commercial preparations contain carbon dioxide gas under pressure in the barium mixture, but usually the quantity of gas is not adequate to produce good double contrast meals. An anti-foaming agent may need to be added to some barium preparations to avoid the formation of bubbles.

Water soluble iodine-containing contrast media are of value when there is a suspected perforation or leakage of an anastomosis after operation. The low radio-opacity of the iodine compared with the barium, and the high osmolarity which results in dilution within the small bowel, make it of little value for routine use in investigation of the small bowel. Water soluble contrast media are contraindicated if there is any danger of aspiration into the lungs.
IODINE PREPARATIONS
Since their introduction in the 1950s, organic iodinated radiographic contrast media (ICM) have been among the most commonly prescribed drugs in the history of modern medicine. Present-day radiologic imaging would be greatly diminished without these agents. ICM generally have a good safety record. Adverse effects from the intravascular administration of ICM are generally mild and self-limited; reactions from the extravascular use of ICM are rare. Nonetheless, severe or life-threatening reactions can occur with either route of administration. All currently used ICM are chemical modifications of a 2,4,6-tri-iodinated benzene ring. They are classified on the basis of their physical and chemical characteristics, including their chemical structure, osmolality, iodine content, and ionization in solution. In clinical practice, categorization based on osmolality is widely used.


Types of Iodinated Contrast Medium
The more iodine, the more "dense" the x-ray effect. There are many different molecules; some examples of organic iodine molecules are iohexol, iodixanol and ioversol. Iodine-based contrast media are water soluble and as harmless as possible to the body. These contrast media are sold as clear, colorless water solutions, with the concentration usually expressed as mg I/mL. Modern iodinated contrast media can be used almost anywhere in the body. Most often they are used intravenously, but for various purposes they can also be used intraarterially, intrathecally (in the spine) and intraabdominally - in just about any body cavity or potential space. Contrast media, both ionic and non-ionic, consist of monomers (one benzoic acid ring) or dimers (two benzoic acid rings).

High-osmolality contrast media
Ionic monomers

High-osmolality contrast media consist of a tri-iodinated benzene ring with 2 organic side chains and a carboxyl group. The iodinated anion, diatrizoate or iothalamate, is conjugated with a cation, sodium or meglumine; the result is an ionic monomer. The ionization at the carboxyl-cation bond makes the agent water soluble. Thus, for every 3 iodine atoms, 2 particles are present in solution (ie, a ratio of 3:2).
The osmolality in solution ranges from 600 to 2100 mOsm/kg, versus 290 mOsm/kg for human plasma. The osmolality is related to some of the adverse events of these contrast media. Ionic monomers are subclassified by the percentage weight of the contrast agent molecule in solution.

Low-osmolality contrast media
There are 3 types of low-osmolality ICM: (1) ionic dimers, (2) nonionic monomers and (3) nonionic dimers.
Ionic dimers
Ionic dimers are formed by joining 2 ionic monomers and eliminating 1 carboxyl group. These agents contain 6 iodine atoms for every 2 particles in solution (ie, a ratio of 6:2). The only commercially available ionic dimer is ioxaglate. Ioxaglate has a concentration of 59%, or 320 mg I/mL, and an osmolality of 600 mOsm/kg. Because of its high viscosity, ioxaglate is not manufactured at higher concentrations. Ioxaglate is used primarily for peripheral arteriography.

Nonionic monomers
In nonionic monomers, the tri-iodinated benzene ring is made water soluble by the addition of hydrophilic hydroxyl groups to organic side chains that are placed at the 1, 3, and 5 positions. Lacking a carboxyl group, nonionic monomers do not ionize in solution. Thus, for every 3 iodine atoms, only 1 particle is present in solution (ie, a ratio of 3:1). Therefore, at a given iodine concentration, nonionic monomers have approximately one half the osmolality of ionic monomers in solution. At normally used concentrations, 25-76%, nonionic monomers have 290-860 mOsm/kg.
Nonionic monomers are subclassified according to the number of milligrams of iodine in 1 mL of solution (eg, 240, 300, or 370 mg I/mL).
The larger side chains increase the viscosity of nonionic monomers compared with ionic monomers. The increased viscosity makes nonionic monomers harder to inject, but it does not appear to be related to the frequency of adverse events.
Common nonionic monomers are iohexol, iopamidol, ioversol, and iopromide.
The nonionic monomers are the contrast agents of choice. In addition to their nonionic nature and lower osmolalities, they are potentially less chemotoxic than the ionic monomers.

Nonionic dimers
Nonionic dimers consist of 2 joined nonionic monomers. These substances contain 6 iodine atoms for every 1 particle in solution (ie, a ratio of 6:1). For a given iodine concentration, the nonionic dimers have the lowest osmolality of all the contrast agents. At approximately 60% concentration by weight, these agents are iso-osmolar with plasma. They are also highly viscous and, thus, have limited clinical usefulness.
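To tie the four classes together, the short sketch below (a rough illustration using the ratios quoted above, not measured values; the example agents in parentheses are those mentioned in the text) computes the number of iodine atoms delivered per particle in solution for each class; a higher value means more iodine for a given osmolality.

# Rough illustration of the iodine-to-particle ratios quoted above.
# Keys are agent classes; values are (iodine atoms, particles in solution).
ratios = {
    "ionic monomer (e.g. diatrizoate)":  (3, 2),
    "ionic dimer (e.g. ioxaglate)":      (6, 2),
    "nonionic monomer (e.g. iohexol)":   (3, 1),
    "nonionic dimer (e.g. iodixanol)":   (6, 1),
}

for agent, (iodine, particles) in ratios.items():
    print(f"{agent}: {iodine}:{particles} -> "
          f"{iodine / particles:.1f} iodine atoms per particle")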

An older type of contrast medium, Thorotrast was based on thorium dioxide, but this was abandoned since it turned out to be carcinogenic.

Properties of Iodine Contrast Medium

Osmolality, viscosity, and iodine concentration are three physico-chemical properties that are inter-related with each other and are also influenced by the structure and size of the iodine-binding molecule. The expression of each property can vary greatly among contrast media.
During the last decade, innovations in the field of x-ray contrast media have focused on manipulation of these properties. However, because of their relatedness, a change in one property may cause a change in another, at times unfavourably so. For example, efforts to decrease osmolality have led to iso-osmolar products, but at the cost of an unwanted higher level of viscosity.
Osmolality is a count of the number of particles in a fluid sample. The unit for counting is the mole, which is equal to 6.02 x 10^23 particles (Avogadro's number). Molarity is the number of particles of a particular substance in a volume of fluid (e.g. mmol/L), and molality is the number of particles dissolved in a mass of fluid (mmol/kg). Osmolality is a count of the total number of osmotically active particles in a solution and is equal to the sum of the molalities of all the solutes present in that solution. For most biological systems the molarity and the molality of a solution are nearly equal because the density of water is 1 kg/L. There is a slight difference between molality and molarity in plasma because of the non-aqueous components present, such as proteins and lipids, which make up about 6% of the total volume. Thus serum is only 94% water, and the molality of a substance in serum is about 6% higher than its molarity.
Osmolality can be calculated by the following formulation:
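In its simplest (ideal) form, ignoring osmotic coefficients, the osmolality is the sum, over all solutes, of each solute's molality multiplied by the number of particles it dissociates into:

\text{Osmolality} = \sum_{i} n_i \, m_i \qquad \text{(mOsm/kg of water)}

where n_i is the number of particles produced in solution by solute i and m_i is its molality in mmol/kg. For a non-ionic monomer n = 1, while for an ionic monomer salt n = 2, which is why, at the same iodine concentration, ionic monomers have roughly twice the osmolality of non-ionic monomers.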

Many of the side effects are due to the hyperosmolar solution being injected; agents that deliver more iodine atoms per particle in solution can therefore achieve the same iodine concentration at a lower osmolality.

Side-effects of Iodine contrast medium (ICM)

The use of iodinated contrast media (ICM) may cause untoward side effects and manifestations of anaphylaxis. The symptoms include nausea, vomiting, widespread erythema, generalized heat sensation, headache, coryza or laryngeal edema, fever, sweating, asthenia, dizziness, pallor, dyspnoea and moderate hypotension. More severe reactions involving the cardiovascular system, such as peripheral vasodilation with pronounced hypotension, tachycardia, dyspnoea, agitation, cyanosis and loss of consciousness, may require emergency treatment. For these reasons the use of contrast media must be limited to cases for which the diagnostic procedure is definitely indicated.
Side effects in association with the intravascular use of iodinated contrast media are usually of a mild to moderate and temporary nature, and are less frequent with non-ionic than with ionic preparations.
Adverse reactions to ICM are classified as idiosyncratic and nonidiosyncratic. The pathogenesis of such adverse reactions probably involves direct cellular effects; enzyme induction; and activation of the complement, fibrinolytic, kinin, and other systems.

Idiosyncratic reactions

Idiosyncratic reactions typically begin within 20 minutes of the ICM injection, independent of the dose that is administered. A severe idiosyncratic reaction can occur after an injection of less than 1 mL of a contrast agent.
Although reactions to ICM have the same manifestations as anaphylactic reactions, these are not true hypersensitivity reactions. Immunoglobulin E (IgE) antibodies are not involved. In addition, previous sensitization is not required, nor do these reactions consistently recur in a given patient. For these reasons, idiosyncratic reactions to ICM are called anaphylactoid reactions.

Anaphylactoid reactions

Anaphylactoid reactions occur rarely (Karnegis and Heinz, 1979; Lasser et al, 1987; Greenberger and Patterson, 1988), but can occur in response to injected as well as oral and rectal contrast and even retrograde pyelography. They are similar in presentation to anaphylactic reactions, but are not caused by an IgE-mediated immune response. Patients with a history of contrast reactions, however, are at increased risk of anaphylactoid reactions (Greenberger and Patterson, 1988; Lang et al, 1993). Pretreatment with corticosteroids has been shown to decrease the incidence of adverse reactions (Lasser et al, 1988; Greenberger et al, 1985; Wittbrodt and Spinler, 1994). The symptoms of anaphylactoid reactions can be classified as mild, moderate, and severe.

Mild symptoms
Mild symptoms include the following: scattered urticaria, which is the most commonly reported adverse reaction; pruritus; rhinorrhea; nausea, brief retching, and/or vomiting; diaphoresis; coughing; and dizziness. Patients with mild symptoms should be observed for the progression or evolution of a more severe reaction, which requires treatment.

Moderate symptoms
Moderate symptoms include the following: persistent vomiting; diffuse urticaria; headache; facial edema; laryngeal edema; mild bronchospasm or dyspnea; palpitations, tachycardia, or bradycardia; hypertension; and abdominal cramps.

Severe symptoms

Severe symptoms include the following: life-threatening arrhythmias (ie, ventricular tachycardia), hypotension, overt bronchospasm, laryngeal edema, pulmonary edema, seizures, syncope, and death.

Anaphylactoid reactions range from urticaria and itching, to bronchospasm and facial and laryngeal edema. For simple cases of urticaria and itching, Benadryl (diphenhydramine) oral or IV is appropriate. For more severe reactions, including bronchospasm and facial or neck edema, albuterol inhaler, or subcutaneous or IV epinephrine, plus diphenhydramine may be needed. If respiration is compromised, an airway must be established prior to medical management.

Nonidiosyncratic reactions

Nonidiosyncratic reactions include the following: bradycardia, hypotension, and vasovagal reactions; neuropathy; cardiovascular reactions; extravasation; and delayed reactions. Other nonidiosyncratic reactions include sensations of warmth, a metallic taste in the mouth, and nausea and vomiting.

Bradycardia, hypotension, and vasovagal reactions
By inducing heightened systemic parasympathetic activity, ICM can precipitate bradycardia (eg, decreased discharge rate of the sinoatrial node, delayed atrioventricular nodal conduction) and peripheral vasodilatation. The end result is systemic hypotension with bradycardia. This may be accompanied by other autonomic manifestations, including nausea, vomiting, diaphoresis, sphincter dysfunction, and mental status changes. Untreated, these effects can lead to cardiovascular collapse and death. Some vasovagal reactions may be a result of coexisting circumstances such as emotion, apprehension, pain, and abdominal compression, rather than ICM administration.

Cardiovascular reactions
ICM can cause hypotension and bradycardia. Vasovagal reactions, a direct negative inotropic effect on the myocardium, and peripheral vasodilatation probably contribute to these effects. The latter 2 effects may represent the actions of cardioactive and vasoactive substances that are released after the anaphylactic reaction to the ICM. This effect is generally self-limiting, but it can also be an indicator of a more severe, evolving reaction.
ICM can lower the ventricular arrhythmia threshold and precipitate cardiac arrhythmias and cardiac arrest. Fluid shifts due to an infusion of hyperosmolar intravascular fluid can produce an intravascular hypervolemic state, systemic hypertension, and pulmonary edema. Also, ICM can precipitate angina.
The similarity of the cardiovascular and anaphylactic reactions to ICM can create confusion in identifying the true nature of the type and severity of an adverse reaction; this confusion can lead to the overtreatment or undertreatment of symptoms.
Other nonidiosyncratic reactions include syncope; seizures; and the aggravation of underlying diseases, including pheochromocytomas, sickle cell anemia, hyperthyroidism, and myasthenia gravis.

Extravasation
Extravasation of ICM into soft tissues during an injection can lead to tissue damage as a result of direct toxicity of the contrast agent or pressure effects, such as compartment syndrome.

Delayed reactions
Delayed reactions become apparent at least 30 minutes after but within 7 days of the ICM injection. These reactions are identified in as many as 14-30% of patients after the injection of ionic monomers and in 8-10% of patients after the injection of nonionic monomers.
Common delayed reactions include the development of flulike symptoms, such as the following: fatigue, weakness, upper respiratory tract congestion, fevers, chills, nausea, vomiting, diarrhea, abdominal pain, pain in the injected extremity, rash, dizziness, and headache.
Less frequently reported manifestations are pruritus, parotitis, polyarthropathy, constipation, and depression.
These signs and symptoms almost always resolve spontaneously; usually, little or no treatment is required. Some delayed reactions may be coincidental.

Nephropathy
Contrast-induced nephropathy is defined as either a greater than 25% increase of serum creatinine or an absolute increase in serum creatinine of 0.5 mg/dL. Three factors have been associated with an increased risk of contrast-induced nephropathy: preexisting renal insufficiency (such as creatinine clearance < 60 mL/min [1.00 mL/s]), preexisting diabetes, and reduced intravascular volume (McCullough, 1997; Scanlon et al, 1999).
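As a rough sketch of how these two criteria can be applied (the function below simply restates the definition above; its name and the example creatinine values are invented for illustration):

# Sketch: flag possible contrast-induced nephropathy from serum creatinine
# values (mg/dL) measured before and after contrast administration.
def possible_contrast_nephropathy(baseline_cr, followup_cr):
    relative_rise = (followup_cr - baseline_cr) / baseline_cr > 0.25   # > 25% increase
    absolute_rise = (followup_cr - baseline_cr) >= 0.5                 # >= 0.5 mg/dL increase
    return relative_rise or absolute_rise

print(possible_contrast_nephropathy(1.0, 1.4))   # True: 40% relative rise
print(possible_contrast_nephropathy(1.0, 1.2))   # False: neither criterion met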
The osmolality of the contrast medium is believed to be of great importance in contrast-induced nephropathy. Ideally, the contrast agent should be iso-osmolar to blood. Modern iodinated contrast media are non-ionic; the older ionic types caused more adverse effects and are not used much anymore.
To minimize the risk for contrast-induced nephropathy, various actions can be taken if the patient has predisposing conditions. These have been reviewed in a meta-analysis.
1. The dose of contrast medium should be as low as possible, while still being able to perform the necessary examination.
2. Non-ionic contrast medium
3. Iso-osmolar, nonionic contrast medium. One randomized controlled trial found that an iso-osmolar, nonionic agent was superior to a low-osmolar, non-ionic agent.
4. IV fluid hydration with saline. There is debate as to the most effective means of IV fluid hydration. One method is 1 mL/kg per hour for 6-12 hours before and after the contrast.
5. IV fluid hydration with saline plus sodium bicarbonate. As an alternative to IV hydration with plain saline, administration of sodium bicarbonate at 3 mL/kg per hour for 1 hour before contrast, followed by 1 mL/kg per hour for 6 hours after contrast, was found superior to plain saline in one randomized controlled trial. This was subsequently corroborated by a multi-center randomized controlled trial, which also demonstrated that IV hydration with sodium bicarbonate was superior to 0.9% normal saline. The renoprotective effects of bicarbonate are thought to be due to urinary alkalinization, which creates an environment less amenable to the formation of harmful free radicals.
6. N-acetylcysteine (NAC). NAC, 600 mg orally twice a day, on the day before and the day of the procedure if creatinine clearance is estimated to be less than 60 mL/min [1.00 mL/s]. A randomized controlled trial found that higher doses of NAC (a 1200-mg IV bolus and 1200 mg orally twice daily for 2 days) benefited patients receiving coronary angioplasty with higher volumes of contrast (relative risk reduction of 74%). Some recent studies suggest that N-acetylcysteine protects the kidney from the toxic effects of the contrast agent (Gleeson & Bulugahapitiya 2004). This effect is, in any case, not overwhelming. Some researchers (e.g. Hoffmann et al 2004) even claim that the effect is due to interference with the creatinine laboratory test itself. This is supported by a lack of correlation between creatinine levels and cystatin C levels.
Other pharmacological agents, such as furosemide, mannitol, theophylline, aminophylline, dopamine, and atrial natriuretic peptide have been tried, but have either not had beneficial effects, or had detrimental effects (Solomon et al, 1994; Abizaid et al, 1999).


DIGITAL SUBTRACTION ANGIOGRAPHY (DSA)



Angiography is an X-ray examination using a radio-opaque contrast medium in the vascular system to image the configuration of the vascular circulation. In a conventional angiography procedure, images are acquired by exposing an area of interest with time-controlled x-rays while injecting contrast medium into the blood vessels. The image obtained also includes all overlying structures besides the blood vessels in this area, such as bone and soft tissue.

Digital Subtraction Angiography (DSA) is a digital vascular imaging technique used in interventional radiology to clearly visualize blood vessels without the overlying structures in the area, such as bone and soft tissue, by subtracting a 'pre-contrast image', or mask, from later images acquired once the contrast medium has been introduced into a structure. DSA thus combines the digitization of an image with a subtraction technique. The most common use of DSA is with fluoroscopic angiography as a substitute for the static serial angiographic films produced by a rapid film changer.

DSA was developed during the 1970s by groups at the University of Wisconsin, the University of Arizona, and the Kinderklinik at the University of Kiel. This work led to the development of commercial systems that were introduced in 1980. Within the next few years many manufacturers of x-ray equipment introduced DSA products. After several years of rapid change, the systems evolved into those available today. The primary changes since the introduction of DSA in 1980 include improved image quality, larger pixel matrices (up to 1024 x 1024), and fully digital systems. Image quality has improved for two reasons: (1) the component parts (e.g., the image intensifier and television camera) have been improved, and (2) the component parts have been more effectively integrated into the system, since the early systems were built using components selected "off the shelf" that may or may not have been properly matched.


Equipment and Apparatus
An image intensifier-television system (fluoroscopy) can be used to form images with little electrical interference, provide moderate resolution, and yield diagnostic-quality images when combined with a high-speed image processor in a DSA system. The television camera is focused on the image-intensifier output phosphor and converts the light intensity into an electrical signal.
The image processor consists of a computer and image-processing hardware. The computer controls the various components (e.g., memories, image-processing hardware, and the x-ray generator), and the image-processing hardware gives the system the speed to do many image-processing operations in real time.

DSA depends on the mating of high-resolution image-intensifier and television technology with computerized information manipulation and storage.
The x-ray image intensifiers, flat-panel detectors and fluoroscopy principles used in DSA are the same as those described in the Fluoroscopy section above.
The Basic Principles of DSA

Under fluoroscopic control, the patient is injected with contrast medium, either directly into a blood vessel or through a catheter, and the blood vessels in the anatomical region of interest are then highlighted on a sequence of radiographic images.
In order to clearly visualize blood vessels in a bony or dense soft-tissue environment, a mask image is first acquired. The mask image is simply an image of the same area before the contrast is administered. The radiological equipment used to capture this is usually an image intensifier, which then keeps producing images of the same area at a set rate (1-6 frames per second), and the original 'mask' image is subtracted from all subsequent images. The radiologist controls how much contrast medium is injected and for how long; smaller structures require less contrast to fill the vessel than larger ones. The resulting images appear with a very pale grey background, which gives high contrast to the blood vessels, which appear a very dark grey. The images are all produced in real time by the computer as the contrast is injected into the blood vessels.
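A minimal sketch of the temporal (mask) subtraction step, assuming the mask and a post-contrast frame are already available as aligned grayscale arrays (NumPy is used purely for illustration, and the pixel values are simulated; real DSA systems also apply logarithmic processing and registration corrections):

import numpy as np

# Hedged sketch of mask subtraction in DSA.
# 'mask' is the pre-contrast frame and 'frame' a post-contrast frame;
# both are assumed to be aligned 2-D arrays of pixel intensities.
rng = np.random.default_rng(0)
mask = rng.integers(100, 120, size=(512, 512)).astype(np.int16)   # background anatomy (bone, soft tissue)
frame = mask.copy()
frame[200:210, :] -= 40          # simulated opacified vessel where iodine attenuates the beam

subtracted = frame - mask        # the anatomy cancels; only the contrast-filled vessel remains
display = subtracted - subtracted.min()   # rescale so the result is non-negative for display

print(display.min(), display.max())       # the vessel stands out against a flat background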
Radiation Exposure
Radiation exposure from X-ray angiography procedures is relatively high when compared with conventional radiographic procedures. Angiography procedures can generate highly localized doses to the skin of patients, which may be above the threshold for deterministic injuries, as well as carrying an increased risk of cancer induction. Staff doses are linked to patient doses because they result from secondary scattered radiation arising mainly from the patient. Staff may also be exposed to primary leakage radiation that is generated at the X-ray target and has penetrated the leaded X-ray tube housing. Without due care and understanding, multiple procedures could lead to serious injury. This highlights the need to optimize the imaging equipment used during angiography and to properly use any dose-saving techniques. The training of staff working in the vicinity of X-ray equipment is also of paramount importance. Radiation exposure to patients and laboratory staff has been recognized as a necessary hazard in angiography.
Procedures that utilize ionizing radiation should be performed in accordance with the As Low As Reasonably Achievable (ALARA) philosophy. Thus, personnel ordering and performing angiography should be very familiar with the radiation dose from angiography procedures and with the ways in which dose can be minimized.
The Ionising Radiations Regulations 1999 require that measures are taken to minimize the radiation dose received by those working in a radiation environment. This is normally achieved by ensuring that those persons working within "Controlled Areas" are adequately trained in matters relating to radiation protection. For some of these groups (e.g. radiologists, cardiologists and radiographers), training in such matters forms a significant part of their basic training.
Specific points to impart are:
1. Digital acquisitions lead to much higher doses than fluoroscopy.
2. When imaging oblique angles, the scatter on the X-ray tube side is greater than that on the intensifier side.
3. Lead protection must be carefully placed to ensure continuity of protection.
4. Distance from the patient is an effective method of dose reduction.



COMPUTERS FUNDAMENTALS AND APPLICATIONS IN MEDICAL IMAGING

By: Sumarsono


The progressive and evolutionary growth in medicine would not be possible without the aid of computers. As a result of the application of the computer to the storage, analysis, and manipulation of data, pathologic conditions can be diagnosed more accurately and earlier in the disease process, resulting in an increased patient cure rate. The increasing use of computers in medical science clearly demonstrates the need for qualified personnel who can understand and operate computerized equipment.



History of computer
The history of the computer can be traced back to man's efforts to count large numbers. This process of counting large numbers generated various systems of numeration, such as the Babylonian, Greek, Roman and Indian systems of numeration. Of these, the Indian system of numeration has been accepted universally. It is the basis of the modern decimal system of numeration (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). Later you will see how the computer carries out all calculations based on the decimal system, yet, perhaps surprisingly, the computer does not understand the decimal system and uses the binary system of numeration for processing.
1. Calculating Machines
It took many generations for early man to build mechanical devices for counting large numbers. The first calculating device, called the ABACUS, was developed by the Egyptian and Chinese people. The word ABACUS means calculating board. It consisted of sticks in horizontal positions on which sets of pebbles were inserted. The modern form of the abacus has a number of horizontal bars, each having ten beads; the horizontal bars represent units, tens, hundreds, etc.
2. Napier’s bones
The Scottish mathematician John Napier built a mechanical device for the purpose of multiplication in 1617 AD. The device was known as Napier's bones.
3. Slide Rule
The English mathematician Edmund Gunter developed the slide rule. This device could perform operations such as addition, subtraction, multiplication, and division. It was widely used in Europe in the 17th century.
4. Pascal's Adding and Subtracting Machine
You might have heard the name of Blaise Pascal. He developed a machine at the age of 19 that could add and subtract. The machine consisted of wheels, gears and cylinders.
5. Leibniz’s Multiplication and Dividing Machine
The German philosopher and mathematician Gottfried Leibniz built around 1673 a mechanical device that could both multiply and divide.
6. Babbage’s Analytical Engine
It was in the year 1823 that the famous Englishman Charles Babbage built a mechanical machine to do complex mathematical calculations, called the difference engine. Later he developed a general-purpose calculating machine called the analytical engine. Charles Babbage is called the father of the computer.
7. Mechanical and Electrical Calculator
At the beginning of the 19th century, the mechanical calculator was developed to perform all sorts of mathematical calculations, and it was widely used up to the 1960s. Later, the rotating parts of the mechanical calculator were replaced by an electric motor, and it was called the electrical calculator.

8. Modern Electronic Calculator
The electronic calculators used in the 1960s ran on electron tubes, which made them quite bulky. These were later replaced with transistors, and as a result the size of calculators became much smaller. The modern electronic calculator can carry out all kinds of mathematical computations and mathematical functions. It can also be used to store some data permanently. Some calculators have built-in programs to perform some complicated calculations.
Computers that used vacuum tubes as their electronic elements were in use throughout the 1950s. Vacuum tube electronics were largely replaced in the 1960s by transistor-based electronics, which are smaller, faster, cheaper to produce, require less power, and are more reliable. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased the speed and reliability of computers. By the 1980s, computers had become sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as washing machines. The 1980s also witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.
Functional Components of a Computer
The term hardware is used to describe the functional equipment components of a computer and covers everything about the computer that is visible. Software designates the parts of the computer system that are invisible, such as the machine language and the programs. A computer program may run from just a few instructions to many millions of instructions. A typical modern computer can execute billions of instructions per second (gigahertz or GHz) and rarely makes a mistake over many years of operation. Large computer programs comprising several million instructions may take teams of programmers years to write, so it is unlikely that such a program is entirely free of errors.
The computer hardware consists of four functionally independent components: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.
Input is the process of entering data and programs into the computer system. The computer is an electronic machine that, like any other machine, takes raw data as input and performs some processing to give out processed data. The input unit therefore takes data from the user to the computer in an organized manner for processing.

The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) the instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer. Control systems in advanced computers may change the order of some instructions so as to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from. The control system's function is as follows (note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU; a small illustrative sketch of the loop follows the list):
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).
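As a purely illustrative sketch (the four-instruction machine below is invented for this explanation and does not correspond to any real CPU), the loop can be mimicked in a few lines of Python:

# Toy fetch-decode-execute loop for an invented four-instruction machine.
# Each instruction is a tuple: ("LOAD", value), ("ADD", value), ("PRINT",) or ("HALT",).
memory = [("LOAD", 2), ("ADD", 3), ("PRINT",), ("HALT",)]
accumulator = 0          # stands in for an ALU register
program_counter = 0      # keeps track of the next instruction to read

while True:
    opcode, *operands = memory[program_counter]    # step 1: fetch the next instruction
    program_counter += 1                           # step 3: increment the program counter
    if opcode == "LOAD":                           # steps 2 and 4-7: decode and execute
        accumulator = operands[0]
    elif opcode == "ADD":
        accumulator += operands[0]
    elif opcode == "PRINT":
        print(accumulator)                         # the print statement acts as the output device
    elif opcode == "HALT":
        break                                      # stop instead of jumping back to step 1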

The process of saving data and instructions permanently is known as storage. Data has to be fed into the system before the actual processing starts. It is because the processing speed of Central Processing Unit (CPU) is so fast that the data has to be provided to CPU with the same speed. Therefore the data is first stored in the storage unit for faster access and processing. This storage unit or the primary storage of the computer system is designed to do the above functionality. It provides space for storing data and instructions.
Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speeds are not required. In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally, computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or might include multiplying or dividing, trigonometric functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other.
Output: This is the process of producing results from the data to obtain useful information. The output produced by the computer after processing must be kept somewhere inside the computer before being given to the user in human-readable form, and it is also stored inside the computer for further processing.
Binary System
In the computer's memory both programs and data are stored in binary form. The binary system has only two values, 0 and 1. As human beings we all understand the decimal system, but the computer can only understand the binary system. This is because the large number of integrated circuits inside the computer can be considered as switches, which can be made ON or OFF. If a switch is ON it is considered 1, and if it is OFF it is 0. A number of switches in different states will give a message like 110101....10. So the computer takes input in the form of 0s and 1s and gives output in the same form. Every number in the binary system can be converted to the decimal system and vice versa; for example, binary 1010 means decimal 10. The computer therefore takes information or data in decimal form from the user, converts it into binary form, processes it to produce output in binary form, and converts the output back to decimal form.
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers; either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
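Python's built-in conversions make the relationships described above easy to verify (a small illustration, not tied to any particular computer architecture):

# Decimal <-> binary conversion: binary 1010 is decimal 10.
print(int("1010", 2))     # 10
print(bin(10))            # 0b1010

# One byte (8 bits) can represent 2**8 = 256 different values:
# 0..255 unsigned, or -128..+127 in two's complement notation.
print(2 ** 8)                          # 256
print((-5) & 0xFF, bin((-5) & 0xFF))   # 251 0b11111011  (two's complement pattern of -5 in one byte)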
Applications in Medical Imaging
Analog to Digital Converters

Devices that produce images of real objects, such as patients, produce an analog rather than a digital signal. The electrical signal coming from the output phosphor of an image intensifier on a fluoroscopic unit, the scintillation crystal of a nuclear medicine detector or a computed tomography (CT) unit, or the piezoelectric crystal of an ultrasound machine is in analog form, with a varying voltage. For these signals to be read as data by the computer, they must be digitized and converted into the binary number system. The peripheral device that performs this task is the analog-to-digital converter (ADC), which transforms the sine wave into discrete increments with a binary number assigned to each increment. The assignment of the binary numbers depends on the output voltage, which in turn represents the degree of attenuation of the various tissue densities within the patient. The basic component of an ADC is the comparator, which outputs a "1" when the voltage equals or exceeds a precise analog voltage and a "0" if it does not. The most significant parameters of an ADC are (1) digitization depth (the number of bits in the resultant binary number); (2) dynamic range (the range of voltage or input signals that result in a digital output); and (3) digitization rate (the rate at which digitization takes place). Achieving an adequate digitization depth is necessary for resolution quality and flexibility in image manipulation. Digitization depth and dynamic range are analogous to resolution and latitude in a radiograph.
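A rough sketch of the digitization step described above (the 8-bit depth, the 0-1 V dynamic range and the signal itself are assumed example figures, not the specification of any particular scanner):

import numpy as np

# Hedged sketch of an analog-to-digital converter: a continuously varying voltage
# is mapped onto discrete levels, each identified by a binary number.
bits = 8                                   # digitization depth (assumed)
v_min, v_max = 0.0, 1.0                    # dynamic range in volts (assumed)
levels = 2 ** bits                         # 256 discrete output values

analog_signal = 0.8 * np.sin(np.linspace(0, np.pi, 10))   # stand-in for a detector voltage
digital = np.round((analog_signal - v_min) / (v_max - v_min) * (levels - 1)).astype(np.uint8)

print(digital)   # each sample is now a binary number between 0 and 255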

To produce a video image, the field size of the image is divided into many small cubes, or a matrix, with each cube assigned a binary number proportional to the degree of attenuation of the x-ray beam or the intensity of the incoming signal. The individual three-dimensional cubes, with length, width, and depth, are called voxels (volume elements), with the degree of attenuation or the intensity of the incoming voltage determining their composition and thickness.

Because the technology for displaying three-dimensional objects has not been fully developed, a two-dimensional square, or pixel (picture element), represents the voxel on the television display monitor or cathode ray tube. The matrix is an array of pixels arranged in two dimensions, length and width, or rows and columns. The more pixels contained in an image, the larger the matrix becomes and the better the resolution quality of the image. For instance, a matrix of 256 x 256 pixels has a total of 65,536 pixels or pieces of data, whereas a matrix of 512 x 512 pixels contains 262,144 pieces of data. One should not confuse field size with matrix size. A larger matrix also allows for more manipulation of the data or of the image displayed on the television monitor and is very beneficial in imaging modalities such as digital subtraction fluoroscopy, CT, nuclear medicine, and ultrasound.
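The pixel counts quoted above follow directly from the matrix dimensions; as a quick check (the 2 bytes per pixel used for the storage figure is an assumption for illustration):

# Pixels and raw storage for two common matrix sizes, assuming 2 bytes per pixel.
for n in (256, 512):
    pixels = n * n
    kib = pixels * 2 / 1024
    print(f"{n} x {n} matrix: {pixels:,} pixels, about {kib:.0f} KiB at 2 bytes/pixel")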

DIGITAL IMAGING PROCESSING

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subfield of digital signal processing, digital image processing has many advantages over analog image processing; it allows a much wider range of algorithms to be applied to the input data, and can avoid problems such as the build-up of noise and signal distortion during processing

Within the computer system, digital images are represented as groups of numbers. These numbers can therefore be changed by applying mathematical operations, and as a result the image is altered. This important concept provides extraordinary control in contrast enhancement, image enhancement, subtraction techniques, and magnification without losing the original image data.
Contrast Enhancement
Window width encompasses the range of densities within an image. A narrow window is comparable to the use of a high-contrast radiographic film; as a result, image contrast is increased. Increasing the width of the window allows more of the gray scale to be visualized, that is, more latitude in the densities of the image. A narrow window is valuable when subtle differences in subject density need to be better visualized. However, the use of a narrow window increases image noise, and densities outside the narrow window are not visualized.
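A minimal window-width sketch, assuming a 12-bit image held in a NumPy array (the window and level values below are arbitrary examples):

import numpy as np

# Hedged sketch: apply a display window to pixel values.
# A narrow width gives high contrast over a small density range;
# a wide width shows more latitude but less contrast.
def apply_window(image, level, width):
    low, high = level - width / 2, level + width / 2
    clipped = np.clip(image, low, high)            # densities outside the window are not visualized
    return (clipped - low) / (high - low) * 255    # map the window onto a 0-255 gray scale

image = np.random.default_rng(1).integers(0, 4096, size=(4, 4))   # fake 12-bit pixel data
print(apply_window(image, level=2048, width=400))    # narrow window: high contrast
print(apply_window(image, level=2048, width=4096))   # wide window: full latitude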

Image Enhancement or Reconstruction
Image enhancement or reconstruction is accomplished by the use of a digital processing operation called filtering, which can be defined as accenting or attenuating selected frequencies in the image. The filtration methods used in medical imaging are classified as (1) convolution; (2) low-pass filtering, or smoothing; (3) band-pass filtering; and (4) high-pass filtering, or edge enhancement. The background intensities within a medical image consist mainly of low spatial frequencies, whereas an edge, typifying a sudden change in intensities, is composed mainly of high spatial frequencies, for example the bone-to-air interface of the skull in computed tomography. Spatial noise originating from within the computer system usually consists of high spatial frequencies. Filtration reduces the amount of high spatial frequencies inherent in the object. The percentage of transmission versus spatial frequency, when plotted on a graph, is called the modulation transfer function (MTF). One goal of continuing research in medical imaging is to develop systems with a higher modulation transfer function.

Convolution is accomplished automatically with computer systems that are equipped with fast Fourier transforms. The convolution process is implemented by placing a filter mask array, or matrix, over the image array in memory. A filter mask is a square array or section of numbers, usually consisting of 3 x 3 elements. The size of the filter mask is determined by the manufacturer of the equipment, although larger masks are not often used because they take longer to process. The convolution filtering process can be conceptualized as placing the filter mask over an area of the image matrix, multiplying each mask element by the image value directly beneath it, obtaining the sum of these values, and then placing that sum within the output image at the same location as in the original image.
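A minimal sketch of this filtering process is shown below, assuming a small made-up image matrix and a simple 3 x 3 smoothing (low-pass) mask; commercial systems use hardware-optimised implementations, but the element-by-element multiply-and-sum is the same idea.

```python
import numpy as np

def convolve_3x3(image, mask):
    """Slide a 3x3 filter mask over the image; each output pixel is the sum
    of the element-by-element products of the mask and the pixels beneath it."""
    padded = np.pad(image, 1, mode="edge")     # repeat edge pixels at the border
    out = np.zeros_like(image, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            region = padded[r:r + 3, c:c + 3]
            out[r, c] = np.sum(region * mask)
    return out

image = np.array([[10, 10, 10, 80],
                  [10, 10, 80, 80],
                  [10, 80, 80, 80],
                  [80, 80, 80, 80]], dtype=float)

smoothing_mask = np.full((3, 3), 1 / 9.0)      # low-pass (smoothing) mask
print(convolve_3x3(image, smoothing_mask))
```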

Subtraction Technique
The advantages of using digital subtraction include the ability to visualize small anatomic structures and to perform the examination via venous injection of contrast media. The most common digital subtraction techniques are temporal subtraction and dual-energy subtraction. Hybrid subtraction is a combination of these two methods.

Magnification
Magnification, sometimes called zooming, is a process of selecting an area of interest and copying each pixel within the area an integer number of times. Large magnifications may give the image an appearance of being constructed of blocks. To provide a more diagnostic image, a smoothing or low-pass filter operation can be done to smooth out the distinct intensities between the blocks.
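The pixel-replication idea, followed by a smoothing pass over the blocky result, can be sketched as follows; the 2x zoom factor, the tiny region of interest, and the 3 x 3 mean filter are illustrative choices only.

```python
import numpy as np

def zoom_by_replication(image, factor=2):
    """Copy each pixel `factor` times in both directions (nearest-neighbour zoom)."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

def smooth(image):
    """Simple 3x3 mean (low-pass) filter to soften the block edges."""
    padded = np.pad(image, 1, mode="edge")
    return np.mean(
        [padded[r:r + image.shape[0], c:c + image.shape[1]]
         for r in range(3) for c in range(3)], axis=0)

roi = np.array([[10, 50], [50, 90]], dtype=float)   # hypothetical region of interest
zoomed = zoom_by_replication(roi, factor=2)
print(zoomed)           # blocky, pixel-replicated enlargement
print(smooth(zoomed))   # same data after the smoothing pass
```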

Three-Dimensional Image
When three-dimensional imaging was first introduced, the images were less than optimal because the resolution was too low to adequately visualize anatomic structures deeper within the body. The images often adequately displayed only the denser structures closer to the body surface, or surface boundaries, which appeared blocky and jagged; the soft tissue or less dense structures were not visualized. Through the use of three-dimensional Fourier transforms (3DFT), new algorithms for mathematical calculations, and computers with faster processing times, three-dimensional images have become smooth, sharply focused, and realistically shaded to demonstrate soft tissue. The ability to demonstrate soft tissue in three-dimensional imaging is referred to as a volumetric rendering technique.

Volumetric rendering is a computer process whereby a "stack" of sequential images is processed as a volume, with the gray-scale intensity information in each pixel being interpolated along the z axis (perpendicular to the x and y axes). Interpolation is necessary because the field of view of the scan (the x and y axes) is not the same as the z axis, owing to interscan spacing. Following this computer process, new data are generated by interpolation, resulting in each new voxel having the same dimensions in all directions. The volumetric rendering technique enables definition of the object's thickness, a crucial factor in three-dimensional imaging and in visualizing subtle densities.









THE BASIC PRINCIPLES OF ULTRASONOGRAPHY (USG)

By : Sumarsono


Diagnostic ultrasonography, sometimes called "diagnostic medical sonography," has become a clinically valuable imaging technique. It differs from diagnostic radiology in that it uses nonionizing, high-frequency sound waves to generate an image of a particular structure. Ultrasound is employed to visualize muscles, tendons, and many internal organs, including their size, structure, and any pathological lesions, with real-time tomographic images. Blood velocities may be calculated in vascular and cardiac structures with the Doppler technique. Ultrasound equipment may be easily moved into the operating room, special care nursery, or intensive care unit, or may be transported by a mobile van service to provide ultrasound services for smaller hospitals and clinics. Ultrasound is cost-effective compared with computed tomography (CT), magnetic resonance imaging (MRI), or angiography. Further developments in high-frequency, millimeter-sized transducers mounted on the tip of an angiographic catheter (IVUS, intravascular ultrasound) have great potential.


History

Physical Principles
Ultrasound consists of sound waves with frequencies greater than 20,000 Hertz (above the upper limit of human hearing). Audible sound frequencies lie below about 15,000 to 20,000 Hz, while the frequency range used in medical ultrasound imaging is 2-15 MHz. Audible sound travels around corners; humans can hear sounds from around a corner (sound diffraction). At higher frequencies the sound tends to travel in straight lines, like electromagnetic beams, and is reflected like a light beam. It is reflected by much smaller objects (because of the shorter wavelengths) and does not propagate easily in gaseous media. At higher frequencies the ultrasound thus behaves more like electromagnetic radiation. The wavelength λ is related to the frequency f by the sound velocity c:
c = λf
Meaning that the velocity equals the wavelength times the number of oscillations per second, and thus:

λ =c/f

The sound velocity in a given material is constant (at a given temperature) but varies from one material to another.



The speed of sound differs from material to material and, together with the density, determines the acoustic impedance of the material. However, the sonographic instrument assumes that the acoustic velocity is constant at 1540 m/s. An effect of this assumption is that in a real body with non-uniform tissues, the beam becomes somewhat de-focused and image resolution is reduced.
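As a rough worked example of λ = c/f under the instrument's assumption of 1540 m/s, the wavelengths at a few frequencies from the 2-15 MHz imaging range can be computed as follows (the values are illustrative, not a calibration table):

```python
C_TISSUE = 1540.0  # assumed average speed of sound in soft tissue, m/s

for f_mhz in (2, 5, 10, 15):                       # typical medical imaging frequencies
    wavelength_mm = C_TISSUE / (f_mhz * 1e6) * 1e3
    print(f"{f_mhz:2d} MHz -> wavelength ~ {wavelength_mm:.2f} mm")
```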

Basically, all ultrasound imaging is performed by emitting a pulse, which is partly reflected from a boundary between two tissue structures, and partially transmitted. The reflection depends on the difference in impedance of the two tissues.
The ratio of the amplitude (energy) of the reflected pulse to that of the incident pulse is called the reflection coefficient. The ratio of the amplitude of the transmitted pulse to that of the incident pulse is called the transmission coefficient. Both depend on the difference in acoustic impedance of the two materials. The acoustic impedance of a medium is the density of the material multiplied by the speed of sound in it:
Z = ρ × c
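A small sketch of how the impedance difference determines the reflected fraction is given below. The intensity reflection coefficient R = ((Z2 - Z1)/(Z2 + Z1))^2 for normal incidence is standard acoustics rather than something derived in this text, and the density and velocity figures are rough illustrative values only.

```python
def impedance(density, speed):
    """Acoustic impedance Z = density x speed of sound (in rayl, kg/m^2/s)."""
    return density * speed

def reflection_coefficient(z1, z2):
    """Fraction of incident intensity reflected at a boundary (normal incidence)."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Rough illustrative values (density kg/m^3, speed m/s); not calibration data.
soft_tissue = impedance(1060, 1540)
air = impedance(1.2, 330)

r = reflection_coefficient(soft_tissue, air)
print(f"tissue/air boundary reflects ~{r * 100:.1f}% of the energy")  # ~99.9%
```

This is why gas-filled bowel or lung blocks the beam almost completely, as noted later in the section on abdominal sonography.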

The reflecting structures do not only reflect the ultrasound directly back to the transmitter; they scatter it in many directions. Thus, the reflecting structures are usually termed scatterers.


The time lag, τ, between emitting and receiving a pulse is the time it takes for sound to travel the distance to the scatterer and back, i.e. twice the range, r, to the scatterer at the speed of sound, c, in the tissue. Thus:

r = cτ / 2

The pulse is thus emitted, and the system is set to await the reflected signals, calculating the depth of the scatterer from the time between emission and reception of the signal. The total time spent awaiting the reflected ultrasound is determined by the preset depth desired in the image.
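A brief numerical illustration of r = cτ/2, again assuming c = 1540 m/s; the echo time used below is an arbitrary example.

```python
C_TISSUE = 1540.0          # assumed speed of sound in soft tissue, m/s

def scatterer_depth(echo_time_s, c=C_TISSUE):
    """Depth of the reflecting structure: the sound travels there and back, so r = c*tau/2."""
    return c * echo_time_s / 2.0

tau = 65e-6                 # example round-trip time of 65 microseconds
print(f"depth ~ {scatterer_depth(tau) * 100:.1f} cm")   # about 5 cm
```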
Piezoelectric effect
Ultrasound is generated by piezoelectric crystals that vibrate when compressed and decompressed by an alternating current applied across the crystal. The same crystals can act as receivers of the reflected ultrasound, converting the vibrations induced by the returning pulse back into an electrical signal.
The piezoelectric effect is the voltage produced between the surfaces of a solid dielectric (nonconducting substance) when a mechanical stress is applied to it; a small current may be produced as well. The effect, discovered by Pierre and Jacques Curie in 1880, is exhibited by certain crystals, e.g., quartz and Rochelle salt, and by ceramic materials. Conversely, when a voltage is applied across certain surfaces of a solid that exhibits the piezoelectric effect, the solid undergoes a mechanical distortion. Piezoelectric materials are used in transducers, e.g., phonograph cartridges, microphones, and strain gauges, which produce an electrical output from a mechanical input, and in earphones and ultrasonic radiators, which produce a mechanical output from an electrical input. Piezoelectric solids typically resonate within narrowly defined frequency ranges; when suitably mounted they can be used in electric circuits as components of highly selective filters or as frequency-control devices for very stable oscillators.
Transducer
A sound wave is typically produced by a piezoelectric transducer encased in a probe. Strong, short electrical pulses from the ultrasound machine make the transducer ring at the desired frequency. The sound is focused either by the shape of the transducer, by a lens in front of the transducer, or by a complex set of control pulses from the ultrasound scanner. This focusing produces an arc-shaped sound wave from the face of the transducer. The wave travels into the body and comes into focus at the desired depth. The probe contains one or more acoustic transducers that send pulses of sound into the body. Whenever a sound wave encounters a material with a different density (acoustic impedance), part of the sound wave is reflected back to the probe and is detected as an echo. The time it takes for the echo to travel back to the probe is measured and used to calculate the depth of the tissue interface causing the echo. The greater the difference between acoustic impedances, the larger the echo. If the pulse hits gas or a solid, the density difference is so great that most of the acoustic energy is reflected and it becomes impossible to image any deeper.
Older technology transducers focus their beam with physical lenses. Newer technology transducers use phased array techniques to enable the sonographic machine to change the direction and depth of focus. Almost all piezoelectric transducers are made of ceramic.
Materials on the face of the transducer enable the sound to be transmitted efficiently into the body (usually a rubbery coating that acts as a form of impedance matching). In addition, a water-based gel is placed between the patient's skin and the probe.
The sound wave is partially reflected from the layers between different tissues. Specifically, sound is reflected anywhere there are density changes in the body: e.g. blood cells in blood plasma, small structures in organs, etc. Some of the reflections return to the transducer.



To generate a 2D image, the ultrasonic beam is swept. A transducer may be swept mechanically by rotating or swinging, or a 1D phased-array transducer may be used to sweep the beam electronically. The received data are processed and used to construct the image, which is then a 2D representation of the slice into the body.
3D images can be generated by acquiring a series of adjacent 2D images. Commonly a specialised probe that mechanically scans a conventional 2D-image transducer is used. However, since the mechanical scanning is slow, it is difficult to make 3D images of moving tissues. Recently, 2D phased array transducers that can sweep the beam in 3D have been developed. These can image faster and can even be used to make live 3D images of a beating heart.
Doppler ultrasonography is used to study blood flow and muscle motion. The different detected speeds are represented in color for ease of interpretation; for example, with a leaky heart valve the leak shows up as a flash of a distinct color. Colors may alternatively be used to represent the amplitudes of the received echoes.
Display Modes
Four different modes of ultrasound are used in medical imaging. These are:
• A-mode (amplitude modulation) : A-mode is the simplest type of ultrasound. The received energy at a certain time, i.e. from a certain depth, can be displayed as energy amplitude. The greater the reflection at the interface, the larger the signal amplitude will appear on the A-mode screen.


• B-mode (Brightness) : The amplitude can also be displayed as the brightness of the certain point representing the scatterer. In B-mode ultrasound, a linear array of transducers simultaneously scans a plane through the body that can be viewed as a two-dimensional image on screen.

• M-mode (motion mode): if some of the scatterers are moving, their motion can be traced. In M-mode, a rapid sequence of B-mode scans whose images follow each other on screen enables the range of motion to be seen and measured, as the organ boundaries that produce reflections move relative to the probe.

• D Mode or Doppler mode: This mode makes use of the Doppler effect. The Doppler information is displayed graphically using spectral Doppler, or as an image using color Doppler (directional Doppler) or power Doppler (non directional Doppler). This Doppler shift falls in the audible range and is often presented audibly using stereo speakers: this produces a very distinctive, although synthetic, pulsing sound.
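As a rough illustration of why the Doppler shift falls in the audible range, the sketch below uses the standard Doppler relationship f_d = 2·f0·v·cos(θ)/c, which is not derived in this text; the transmit frequency, blood velocity, and insonation angle are illustrative values only.

```python
import math

C_TISSUE = 1540.0    # assumed speed of sound in tissue, m/s

def doppler_shift(f0_hz, velocity_m_s, angle_deg):
    """Standard Doppler shift for a moving reflector: f_d = 2 * f0 * v * cos(theta) / c."""
    return 2.0 * f0_hz * velocity_m_s * math.cos(math.radians(angle_deg)) / C_TISSUE

f_d = doppler_shift(f0_hz=5e6, velocity_m_s=0.5, angle_deg=60)   # illustrative blood flow
print(f"Doppler shift ~ {f_d:.0f} Hz")    # roughly 1.6 kHz, well within the audible range
```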

Artifacts
Artifacts are portions of the display that are not a "true" representation of the tissue imaged. Medical diagnostic ultrasound imaging makes use of certain artifacts to characterize tissue. The ability to differentiate solid from cystic tissue is a hallmark of ultrasound imaging. Acoustic shadowing and acoustic enhancement are the two artifacts that provide the most useful diagnostic information. Acoustic shadowing is diminished sound or loss of sound posterior to a strongly reflecting structure (e.g., large calcifications, bone) or a strongly attenuating structure (solid tissue, significantly dense or malignant masses).

Acoustic enhancement is increased transmission of the sound wave posterior to a weakly attenuating structure (e.g., simple cysts or weakly attenuating masses). The gain curve assumes a certain loss, or attenuation, with depth of travel.

Diagnostic applications
A general-purpose sonographic machine may be able to be used for most imaging purposes, but specialty applications may be served only by use of a specialty transducer. The dynamic nature of many studies generally requires specialized features, such as endovaginal, endorectal, or transesophageal transducers, for a sonographic machine to be effective.
Obstetrical ultrasound is commonly used during pregnancy to check on the development of the fetus. In a pelvic sonogram, organs of the pelvic region are imaged, including the uterus and ovaries or the urinary bladder. Men are sometimes given a pelvic sonogram to check on the health of the bladder and prostate. There are two methods of performing pelvic sonography: externally or internally. The internal pelvic sonogram is performed either transvaginally (in a woman) or transrectally (in a man). Sonographic imaging of the pelvic floor can produce important diagnostic information about the precise relationship of abnormal structures to other pelvic organs, and it provides useful guidance in treating patients with symptoms related to pelvic prolapse, double incontinence, and obstructed defecation.
In abdominal sonography, the solid organs of the abdomen, such as the pancreas, aorta, inferior vena cava, liver, gall bladder, bile ducts, kidneys, and spleen, are imaged. Sound waves are blocked by gas in the bowel, so diagnostic capability in this area is limited. The appendix can sometimes be seen when inflamed, e.g. in appendicitis.






THE BASIC PRINCIPLES OF MAGNETIC RESONANCE IMAGING (MRI)


By : Sumarsono


History
Felix Bloch and Edward Purcell, both of whom were awarded the Nobel Prize in 1952, discovered the magnetic resonance phenomenon independently in 1946. In the period between 1950 and 1970, NMR was developed and used for chemical and physical molecular analysis.
In 1971 Raymond Damadian showed that the nuclear magnetic relaxation times of tissues and tumors differed, motivating scientists to consider magnetic resonance for the detection of disease. Magnetic resonance imaging was first demonstrated on small test tube samples in 1973 by Paul Lauterbur, who used a back-projection technique similar to that used in CT. In 1975 Richard Ernst proposed magnetic resonance imaging using phase and frequency encoding and the Fourier transform; this technique is the basis of current MRI methods. A few years later, in 1977, Raymond Damadian demonstrated MRI of the whole body using a technique called field-focusing nuclear magnetic resonance, and in the same year Peter Mansfield developed the echo-planar imaging (EPI) technique. Edelstein and coworkers demonstrated imaging of the body using Ernst's technique in 1980; a single image could be acquired in approximately five minutes. By 1986, the imaging time had been reduced to about five seconds without sacrificing too much image quality. In the same year the NMR microscope was developed, allowing approximately 10 µm resolution on approximately one-centimetre samples. In 1987 echo-planar imaging was used to perform real-time movie imaging of a single cardiac cycle, and in the same year Charles Dumoulin perfected magnetic resonance angiography (MRA), which allowed imaging of flowing blood without the use of contrast agents.



In 1991, Richard Ernst was awarded the Nobel Prize in Chemistry for his achievements in pulsed Fourier transform NMR and MRI. In 1992 functional MRI (fMRI) was developed, a technique that allows the mapping of the function of the various regions of the human brain. Five years earlier many clinicians had thought echo-planar imaging's primary application would be in real-time cardiac imaging; the development of fMRI opened up a new application for EPI in mapping the regions of the brain responsible for thought and motor control. In 1994, researchers at the State University of New York at Stony Brook and Princeton University demonstrated the imaging of hyperpolarized 129Xe gas for respiration studies. In 2003, Paul C. Lauterbur of the University of Illinois and Sir Peter Mansfield of the University of Nottingham were awarded the Nobel Prize in Medicine for their discoveries concerning magnetic resonance imaging. MRI is clearly a young but growing science.
The Basic Physics
The human body is composed primarily of fat and water. Fat and water contain many hydrogen atoms, making the human body approximately 63% hydrogen atoms. Hydrogen nuclei produce an NMR signal. For these reasons, magnetic resonance imaging primarily images the NMR signal from hydrogen nuclei. Each voxel of an image of the human body contains one or more tissues. Within each cell there are water molecules, and each water molecule has one oxygen and two hydrogen atoms. A hydrogen atom's nucleus contains a single proton.

Spin

The proton possesses a property called spin: it can be thought of as a small magnetic field, and it causes the nucleus to produce an NMR signal.

Spin is a fundamental property of nature, like electrical charge or mass. Spin comes in multiples of 1/2 and can be positive or negative. Protons, electrons, and neutrons possess spin; individual unpaired electrons, protons, and neutrons each possess a spin of 1/2. In the deuterium atom (2H), with one unpaired electron, one unpaired proton, and one unpaired neutron, the total electronic spin = 1/2 and the total nuclear spin = 1. Two or more particles with spins of opposite sign can pair up to eliminate the observable manifestations of spin; an example is helium. In nuclear magnetic resonance, it is unpaired nuclear spins that are of importance. The spin of a proton can be represented as a magnetic moment vector, causing the proton to behave like a tiny magnet with a north (N) and a south (S) pole.
Consider a collection of 1H nuclei (spinning protons) in the absence of an externally applied magnetic field: the magnetic moments have random orientations. When the proton is placed in an external magnetic field B0, the spin vector of the particle aligns itself with the external field in one of two orientations with respect to B0 (denoted parallel and anti-parallel). Protons aligned in the parallel orientation are said to be in a low-energy state; protons in the anti-parallel orientation are said to be in a high-energy state.

The energy differential between the high and low energy states is proportional to the strength of the externally applied magnetic field B0. Also related to the strength of B0 is the number of spins in the low energy state. The higher the B0, the greater the number of spins aligned in the low-energy state. The number of spins in the low energy state in excess of the number in the high-energy state is referred to as the spin excess. The magnetic moments of these excess spins add to form the net magnetization and thus the tissue placed in the magnetic field becomes magnetized. The net magnetization is also represented as a vector quantity. As previously mentioned, a larger B0 will produce a greater spin excess. Therefore, a larger B0 will produce a larger net magnetization vector.
Larmor’s Formula
The spin vectors do not line up precisely with the external magnetic field but at an angle to the field, and they rotate about the direction of the magnetic field in a manner similar to the wobbling motion of a spinning top. This wobbling motion is called precession and occurs at a specific frequency (rate) for a given atom's nucleus in a magnetic field of a specific strength.
The Larmor equation expresses the relationship between the strength of the magnetic field, B0, and the precessional frequency, ω0, of an individual spin: the precessional frequency is equal to the strength of the external static magnetic field (B0) multiplied by the gyromagnetic ratio (γ). Increasing B0 will increase the precessional frequency and, conversely, decreasing B0 will decrease the precessional frequency. This is analogous to a spinning top, which precesses due to the force of gravity; if gravity were decreased (as it is on the moon), the top would precess more slowly.
ω0 = γ B0
The proportionality constant γ in front of B0 is known as the gyromagnetic ratio of the nucleus. The precessional frequency, ω0, is also known as the Larmor frequency. For a hydrogen nucleus, the gyromagnetic ratio is 4257 Hz/Gauss. Thus at 1.5 Tesla (15,000 Gauss), ω0 = 63.855 MHz.
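The 63.855 MHz figure follows directly from the Larmor equation; a quick numerical check, using the gyromagnetic ratio quoted above:

```python
GAMMA_H = 4257.0        # gyromagnetic ratio of hydrogen, Hz per Gauss (as quoted above)

def larmor_frequency_hz(b0_gauss):
    """Precessional (Larmor) frequency: omega_0 = gamma * B0."""
    return GAMMA_H * b0_gauss

b0 = 15000.0            # 1.5 Tesla expressed in Gauss
print(f"{larmor_frequency_hz(b0) / 1e6:.3f} MHz")   # 63.855 MHz
```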



Fourier Transformation
To understand how an image is constructed in MRI, it is first instructive to look at the Fourier transformation (FT). The FT permits a signal to be decomposed into a sum of sine waves, each with a different frequency, phase, and amplitude.

S(t) = a0 + a1 sin(ω1 t + φ1) + a2 sin(ω2 t + φ2) + ...
The FT of a signal in the time domain can be represented in the equivalent frequency domain by a series of peaks of various amplitudes. In MRI the signal is spatially encoded by changes of phase and frequency, which are then unravelled by performing a 2D FT to identify pixel intensities across the image.
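A minimal numerical illustration of this decomposition, using a toy signal built from two sine waves; the frequencies, amplitudes, and sampling rate are arbitrary choices, and NumPy's FFT simply stands in for the scanner's Fourier processing.

```python
import numpy as np

fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)              # one second of "signal"
signal = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)               # time domain -> frequency domain
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

# Two peaks appear, at 50 Hz and 120 Hz, recovering the components of the sum.
peaks = freqs[np.abs(spectrum) > 100]
print(peaks)                                 # [ 50. 120.]
```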

MRI Signal Production
Hydrogen exists in many molecules in the body. Water (consisting of two hydrogen atoms and one oxygen) comprises up to 70% of body weight. Hydrogen is also present in fat and most other tissues in the body. The varying molecular structures and the amount of hydrogen in various tissues affect how the protons behave in the external field. For example, because of the total amount of hydrogen in water, it has one of the strongest net magnetization vectors relative to other tissues. Other structures and tissues within the body have a lower hydrogen concentration and become magnetized to a lesser extent; in other words, their net magnetization is less intense.
The amount of mobile hydrogen protons that a given tissue possesses, relative to water (specifically CSF), is referred to as its spin density (proton density). This is the basis from which we begin to produce images using magnetic resonance. The hydrogen nucleus contains one proton and possesses a significant magnetic moment, and hydrogen is very abundant in the human body. Placing the patient in a large external magnetic field therefore magnetizes the tissue (hydrogen), preparing it for the MR imaging process.
A radio wave is actually an oscillating electromagnetic field. It is oriented perpendicular to the main magnetic field (B0). If a pulse of RF energy is applied to the tissue at the Larmor frequency, the individual spins begin to precess in phase, as does the net magnetization vector. As the RF pulse continues, some of the spins in the lower energy state absorb energy from the RF field and make a transition into the higher energy state. This has the effect of tipping the net magnetization toward the transverse plane.


The angle through which M has rotated away from the z-axis is known as the flip angle. The strength and duration of B1 determine the amount of energy available to achieve spin transitions between the parallel and anti-parallel states; thus, the flip angle is proportional to the strength and duration of B1. A 90-degree pulse is applied to produce a 90-degree flip of the net magnetization. A 180-degree pulse rotates M into a position directly opposite to B0, with greater numbers of spins adopting anti-parallel (rather than parallel) states. If the B1 field is applied indefinitely, M tilts away from the z-axis, through the x-y plane, towards the negative z direction, and finally back towards the x-y plane and the z-axis (where the process begins again).
As the magnetization (now referred to as transverse magnetization, or Mxy) precesses through the receiver coil, a current or signal is induced in the coil. The principle behind this signal induction is Faraday's law of induction, which states that if a magnetic field is moved through a conductor, a current will be produced in the conductor. Increasing the size of the magnetic field, or the speed with which it moves, will increase the size of the signal (current) induced in the conductor.
In order to detect the signal produced in the receiver coil, the transmitter must be turned off. When the RF pulse is discontinued, the signal in the coil begins at a given amplitude (determined by the amount of magnetization precessing in the transverse plane and the precessional frequency) and fades rapidly away. This initial signal is referred to as the free induction decay, or FID.

The return of M to its equilibrium state (the direction of the z-axis) is known as relaxation. The signal fades as the individual spins contributing to the net magnetization lose their phase coherence, making the vector sum equal to zero. The flipped nuclei start off spinning together in phase but quickly become incoherent (out of phase); this loss of phase coherence is known as T2 relaxation. Disturbances in the magnetic field (inhomogeneity and magnetic susceptibility) increase the rate at which coherence is lost, so the FID actually decays at a faster rate known as T2* (T2-star). At the same time, but independently, some of the spins that had moved into the higher energy state give off their energy to the lattice and return to the lower energy state, causing the net magnetization to regrow along the z-axis. This regrowth occurs at a rate given by the tissue relaxation parameter known as T1. The total NMR signal is a combination of the total number of nuclei (proton density), reduced by the T1, T2, and T2* relaxation components.
Magnet inhomogeneity
It is virtually impossible to construct an NMR magnet with a perfectly uniform magnetic field strength, B0. Much additional hardware is supplied with NMR machines to assist in normalising the B0 field. However, it is inevitable that an NMR sample will experience different values of B0 across its body, so that nuclei comprising the sample (that exhibit spin) will have different precessional frequencies (according to the Larmor equation). Immediately following a 90-degree pulse, the sample's Mx-y is coherent. However, as time goes on, phase differences at various points across the sample will occur because nuclei precess at different frequencies. These phase differences increase with time, and the vector addition of these phases reduces Mx-y with time.
T1 relaxation
Following termination of an RF pulse, nuclei will dissipate their excess energy as heat to the surrounding environment (or lattice) and revert to their equilibrium position. Realignment of the nuclei along B0, through a process known as recovery, leads to a gradual increase in the longitudinal magnetisation. The time taken for a nucleus to relax back to its equilibrium state depends on the rate at which excess energy is dissipated to the lattice. Let M0-long be the amount of magnetisation parallel with B0 before an RF pulse is applied, and let M-long be the z component of M at time t following a 90-degree pulse at time t = 0. It can be shown that the process of equilibrium restoration is described by the equation

M-long = M0-long (1 - e^(-t/T1))
where T1 is the time taken for approximately 63% of the longitudinal magnetisation to be restored following a 90 degree pulse.


T2 relaxation
While nuclei dissipate their excess energy to the lattice following an RF pulse, the magnetic moments also interact with each other, causing a decrease in transverse magnetisation. This effect is similar to that produced by magnet inhomogeneity, but on a smaller scale. The decrease in transverse magnetisation (which does not involve the emission of energy) is called decay. The rate of decay is described by a time constant, T2*, which is the time it takes for the transverse magnetisation to decay to 37% of its original magnitude. T2* characterises dephasing due to both B0 inhomogeneity and transverse relaxation. Let M0-trans be the amount of transverse magnetisation (Mx-y) immediately following an RF pulse, and let M-trans be the amount of transverse magnetisation at time t following a 90-degree pulse at time t = 0. It can be shown that
M-trans = M0-trans e^(-t/T2*)
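Both relaxation expressions can be checked numerically with a short sketch; the T1 and T2* values below are illustrative numbers, not tissue constants. At t = T1 roughly 63% of the longitudinal magnetisation has recovered, and at t = T2* the transverse magnetisation has fallen to roughly 37% of its starting value.

```python
import math

def m_long(t, m0=1.0, t1=0.9):
    """Longitudinal recovery after a 90-degree pulse: M = M0 * (1 - exp(-t/T1))."""
    return m0 * (1.0 - math.exp(-t / t1))

def m_trans(t, m0=1.0, t2_star=0.05):
    """Transverse decay: M = M0 * exp(-t/T2*)."""
    return m0 * math.exp(-t / t2_star)

T1, T2_STAR = 0.9, 0.05            # illustrative values in seconds
print(f"recovered at t = T1 : {m_long(T1, t1=T1):.2f}")                 # ~0.63
print(f"remaining at t = T2*: {m_trans(T2_STAR, t2_star=T2_STAR):.2f}")  # ~0.37
```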

In order to obtain a signal with a T2 dependence rather than a T2* dependence, a pulse sequence known as the spin echo has been devised, which reduces the effect of B0 inhomogeneity on Mx-y. A pulse sequence is an appropriate combination of one or more RF pulses and gradients (see the next section) with intervening periods of recovery. A pulse sequence is defined by several parameters, the main ones being the repetition time (TR), the echo time (TE), the flip angle, the number of excitations (NEX), the bandwidth, and the acquisition matrix.



The signal intensity
The signal intensity on the MR image is determined by four basic parameters: 1) proton density, 2) T1 relaxation time, 3) T2 relaxation time, and 4) flow. Proton density is the concentration of protons in the tissue in the form of water and macromolecules (proteins, fat, etc). The contrast on the MR image can be manipulated by changing the pulse sequence parameters. A pulse sequence sets the specific number, strength, and timing of the RF and gradient pulses.
The two most important parameters are the repetition time (TR) and the echo time (TE). The TR is the time between consecutive 90-degree RF pulses. The TE is the time between the initial 90-degree RF pulse and the echo. The most common pulse sequences are the T1-weighted and T2-weighted spin-echo sequences. The T1-weighted sequence uses a short TR and short TE (TR < 1000 msec, TE < 30 msec). The T2-weighted sequence uses a long TR and long TE (TR > 2000 msec, TE > 80 msec). The T2-weighted sequence can be employed as a dual-echo sequence: the first or shorter echo (TE < 30 msec) is proton density (PD) weighted, or a mixture of T1 and T2. This image is very helpful for evaluating periventricular pathology, such as multiple sclerosis, because the hyperintense plaques are contrasted against the lower-signal CSF. More recently, the FLAIR (Fluid Attenuated Inversion Recovery) sequence has replaced the PD image; FLAIR images are T2-weighted with the CSF signal suppressed.


One of the great advantages of MRI is its excellent soft-tissue contrast, which can be widely manipulated. In a typical image acquisition the basic unit of each sequence is repeated hundreds of times. By altering the echo time (TE) or repetition time (TR), i.e. the time between successive 90° pulses, the signal contrast can be altered or weighted. For example, if a long TE is used, inherent differences in the T2 times of tissues become apparent: tissue with a long T2 (e.g. water) takes longer to decay, so its signal is greater (appears brighter in the image) than the signal from tissue with a short T2 (fat). In a similar manner, TR governs T1 contrast. Tissue with a long T1 (water) takes a long time to recover back to the equilibrium magnetisation value, so a short TR interval makes this tissue appear dark compared with tissue with a short T1 (fat). When TE and TR are chosen to minimise both these weightings, the signal contrast is derived only from the number or density of spins in a given tissue; this image is said to be 'proton-density weighted'. To summarise: 1) T2-weighting requires long TE and long TR; 2) T1-weighting requires short TE and short TR; 3) PD-weighting requires short TE and long TR.
Spatial Localisation
The actual location within the sample from which the RF signal was emitted is determined by superimposing magnetic field gradients on the otherwise homogeneous external magnetic field B0. For example, a magnetic field gradient can be superimposed by placing two coils of wire (wound in opposite directions) around the B0 field with the longitudinal axis oriented in the z direction and then passing direct current through the coils. The magnetic field from the coil pair adds to the B0 field, with the result that one end of the magnet has a higher field strength than the other; this is known as a magnetic field gradient. According to the Larmor equation, the magnetic field gradient causes otherwise identical nuclei to precess at different Larmor frequencies. The frequency deviation is proportional to the distance of the nuclei from the centre of the gradient coil and to the current flowing through the coil.


Slice Selection
The plane to be imaged is selected by applying a magnetic field gradient along one axis while the RF pulse is transmitted. Only the spins whose Larmor frequencies fall within the bandwidth of the RF pulse are excited, so the position of the slice is set by the gradient and the RF centre frequency, and its thickness by the gradient strength and the RF bandwidth.

Frequency Encoding
Three magnetic field gradients, placed orthogonally to one another inside the bore of the magnet, are required to encode information in three dimensions. With a slice selected and excited as described in the previous paragraph, current is switched to one of the two remaining gradient coils (referred to as the frequency encoding gradient). This has the effect of spatially encoding the excited slice along one axis, so that columns of spins perpendicular to the axis precess at slightly different Larmor frequencies. For a homogeneous sample, the intensity of the signal at each frequency is proportional to the number of protons in the corresponding column.
The frequency encoding gradient is turned on just before the receiver is gated on and is left on while the signal is sampled or read out. For this reason the frequency encoding gradient is also known as the readout gradient. The resulting FID is a graph of the signal (formed from the interference pattern of the different frequencies) induced in the receiver versus time. If the FID is subjected to a Fourier transform, a conventional spectrum in which signal amplitude is plotted as a function of frequency is obtained. Thus a graph of signal versus frequency is obtained, which corresponds to a series of lines or views representing columns of spins in the slice.
Phase Encoding
Suppose a slice through a homogeneous sample has been selected and excited as described in the Slice Selection section, and then frequency encoded as described in the previous section. After a short time, the phase of the spins at one end of the gradient leads that of the spins at the other end because they are precessing faster. If the frequency encoding gradient is switched off, the spins precess (once more) at the same angular velocity but with a retained phase difference. This phenomenon is known as phase memory.
A phase encoding gradient is applied orthogonally to the other two gradients after slice selection and excitation, but before frequency encoding. The phase encoding gradient does not change the frequency of the received signal because it is not on during signal acquisition. It serves as a phase memory, remembering relative phase throughout the slice.
To construct a 256 x 256 pixel image, the pulse sequence is repeated 256 times with only the phase encoding gradient changing. The change occurs in a stepwise fashion, with the field strength decreasing until it reaches zero, then increasing in the opposite direction until it reaches its original amplitude. At the end of the scan, 256 lines (one for each phase encoding step), each comprising 256 samples of frequency, are produced. A Fourier transformation allows the phase information to be extracted, so that a pixel (x, y) in the slice can be assigned the intensity of the signal with the correct phase and frequency corresponding to the appropriate volume element. The signal intensity is then converted to a grey scale to form an image.
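The final reconstruction step can be sketched with a toy example: a made-up "image" is transformed into a k-space matrix of frequency/phase samples and then recovered with a 2D inverse Fourier transform, whose magnitudes are mapped onto a grey scale. This is only a conceptual illustration; real reconstructions include many corrections not shown here.

```python
import numpy as np

# Toy object standing in for a slice through the patient.
image = np.zeros((256, 256))
image[96:160, 96:160] = 1.0                    # a bright square "structure"

# MRI acquires its data in the frequency/phase (k-space) domain...
k_space = np.fft.fft2(image)

# ...and reconstruction is essentially a 2D inverse Fourier transform,
# followed by mapping the magnitudes onto a grey scale for display.
reconstructed = np.abs(np.fft.ifft2(k_space))
grey = (reconstructed / reconstructed.max() * 255).astype(np.uint8)

print(grey.shape, grey.min(), grey.max())      # (256, 256) 0 255
```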

MRI Sequences
MRI signal intensity depends on many parameters, including proton density, T1 and T2 relaxation times. Different pathologies can be selected by the proper choice of pulse sequence parameters. Repetition time (TR) is the time between two consecutive RF pulses measured in milliseconds. For a given type of nucleus in a given environment, TR determines the amount of T1 relaxation. The longer the TR, the more the longitudinal magnetisation is recovered. Tissues with short T1 have greater signal intensity than tissues with a longer T1 at a given TR. A long TR allows more magnetisation to recover and thus reduces differences in the T1 contribution in the image contrast. Echo time (TE) is the time from the application of an RF pulse to the measurement of the MR signal. TE determines how much decay of the transverse magnetisation is allowed to occur before the signal is read. It therefore controls the amount of T2 relaxation. The application of RF pulses at different TRs and the receiving of signals at different TEs produces variation in contrast in MR images. Next some common MRI sequences are described.
Spin Echo Pulse Sequence
The spin echo (SE) sequence is the most commonly used pulse sequence in clinical imaging. The sequence comprises two radiofrequency pulses - the 90 degree pulse that creates the detectable magnetisation and the 180 degree pulse that refocuses it at TE. The selection of TE and TR determines resulting image contrast. In T1-weighted images, tissues that have short T1 relaxation times (such as fat) present as bright signal. Tissues with long T1 relaxation times (such as cysts, cerebrospinal fluid and edema) show as dark signal. In T2-weighted images, tissues that have long T2 relaxation times (such as fluids) appear bright.
In cerebral tissue, differences in T1 relaxation times between white and grey matter permit the differentiation of these tissues on heavily T1-weighted images. Proton density-weighted images also allow distinction of white and grey matter, with tissue signal intensities mirroring those obtained on T2-weighted images. In general, T1-weighted images provide excellent anatomic detail, while T2-weighted images are often superior for detecting pathology.
Gradient Recalled Echo Pulse Sequences
Gradient recalled echo (GRE) sequences, which are significantly faster than SE sequences, differ from SE sequences in that there is no 180-degree refocusing RF pulse. In addition, the single RF pulse in a GRE sequence is usually switched on for less time than the 90-degree pulse used in SE sequences. The scan time can be reduced by using a shorter TR, but this is at the expense of the signal-to-noise ratio (SNR). GRE sequences are also sensitive to magnetic susceptibility differences between tissues: at the interface of bone and tissue or air and tissue, there is an apparent loss of signal that increases as TE is increased. Therefore it is usually inappropriate to acquire T2-weighted images with GRE sequences. Nevertheless, GRE sequences are widely used for obtaining T1-weighted images for a large number of slices or a volume of tissue in order to keep scanning times to a minimum. GRE sequences are often used to acquire T1-weighted 3D volume data that can be reformatted to display image sections in any plane. However, the reformatted data will not have the same in-plane resolution as the original images unless the voxel dimensions are the same in all three directions.
Paramagnetic Contrast Agents
MRI contrast agents are primarily paramagnetic agents designed to shorten the T1 and T2 relaxation times of adjacent hydrogen nuclei. Some agents are classified as T1 active or T2 active. They produce complex effects that vary depending on the RF pulsing sequence; for example, T1 shortening increases the RF signal intensity, while T2 shortening decreases it. In many instances paramagnetic contrast agents permit the visualization of lesions with a shorter TR, thus decreasing scan time.
Paramagnetic contrast agents have been developed for oral, intravenous, and inhalation administration, and although this is an active research area in MRI, at the present time the IV agents predominate. Gadolinium (Gd3+), which has seven unpaired electrons, has the strongest relaxation properties and has proven effective in demonstrating various types of lesions. However, the free ion is extremely toxic and is therefore administered as a complex with DTPA (diethylenetriaminepentaacetic acid), Gd-DTPA, to ensure detoxification.
MRI Artefact
The term artefact refers to the occurrence of undesired image distortions, which can lead to misinterpretation of MRI data. The theoretical limit of the precision of measurements obtained from medical images is determined by the point spread function of the imaging device (Rossmann (1969) and Robson et al. (1997)). However, in practice, the limit is determined by the physiological movements of a living subject (e.g. respiration, heartbeat, twitching or tremor). The finite thickness of the imaged slice of tissue may also be a constraint: if the signals arising from different tissue compartments cannot be separated within each voxel, an artefact known as partial voluming is produced. This uncertainty in the exact contents of any voxel is an inherent property of the discretised image and would exist even if the contrast between tissues were infinite. Furthermore, chemical shift and susceptibility artefacts (Schenck (1996)), magnetic field and radio frequency non-uniformity, and field-of-view and slice-thickness calibration inaccuracies can all compromise the accuracy with which quantitative information can be obtained for a structure of interest in the living human body. A detailed analysis of all these effects is, however, beyond the scope of this article.
Gibbs Ringing or Truncation Artefact
This arises due to the finite nature of sampling. According to Fourier theory, any repetitive waveform can be decomposed into an infinite sum of sinusoids, each with a particular amplitude, phase and frequency. In practice, a waveform (e.g. the MRI signal) can only be sampled or detected over a given time period, and therefore the signal will be under-represented. The artefact is prominent at the interface between high- and low-signal regions and results in 'ringing', a number of discrete lines adjacent to the high-signal edge.
Phase-wrap or 'Aliasing'
Aliasing can occur in either the phase or the frequency direction but is mainly a concern in the phase direction. It is a consequence of the Nyquist theorem: the sampling rate has to be at least twice the highest frequency expected. The effect occurs whenever there is an object or patient anatomy outside the selected field-of-view but within the sensitive volume of the coil. In the frequency direction it is avoided by increasing the sampling rate and filtering the received signal. In the phase direction the effect can be avoided by swapping the direction of phase/frequency encoding or by using larger or rectangular fields-of-view.
Motion Artefacts (Ghosting)
Ghosting describes discrete or diffuse signal spread throughout both the object and the background. It can occur due to instabilities within the system (e.g. the gradients), but a common cause is patient motion. When movement occurs, the effect is mainly seen in the phase direction because of the discrepancy between the times taken to encode the image in each direction: frequency encoding, done in one go at the time of echo collection, takes a few milliseconds, whereas phase encoding requires hundreds of repetitions of the sequence, taking minutes. Motion causes anatomy to appear in a different part of the field of view, so that the phase differences are no longer consistent. Periodic motion, e.g. respiratory or cardiac motion, can be 'gated' to the acquisition so that the phase encoding is performed at the 'same' part of the cycle. This extends imaging time, as the scanner 'waits' for the appropriate signal, but it is effective in combating these artefacts. Modern scanners are now so fast that breath-hold scans are replacing respiratory compensation. Non-periodic motion, e.g. coughing, cannot easily be remedied, and patient co-operation remains the best method of reducing these artefacts. In a simple experiment in which a test object is moved gently during the scan, the effect is dramatic: because of the Fourier transform nature of MRI, even a small displacement produces artefacts throughout the image.
Chemical Shift
This artefact arises from the inherent difference in the resonant frequencies of the two main components of an MR image: fat and water. It is only seen in the frequency direction. At 1.5 Tesla there is approximately a 220 Hz difference between the fat and water resonance frequencies. If this frequency range has not been accommodated in the frequency encoding (governed by the receiver bandwidth and matrix size), then adjacent fat and water in the object will artificially appear in separate pixels in the final image. This leads to a characteristic artefact of a high-signal band (where the signal has 'built up') and an opposite dark band (signal void).
Susceptibility
The susceptibility of a material is its tendency to become magnetised when placed in a magnetic field. Materials with large differences in susceptibility create local disturbances in the magnetic field, resulting in non-linear changes of resonant frequency, which in turn create image distortion and signal changes. The problem is severe in the case of ferromagnetic materials but can also occur at air-tissue boundaries. One example was acquired in a patient who had permanent dental work: it did not create any problems for the patient, but the large differences in susceptibility caused major distortion and signal void in the final image.
Other Artefacts
An RF or zipper artefact is caused by a breakdown in the integrity of the RF shielding in the scan room. Interference from an RF source causes a line or band in the image, the position and width of which are determined by the frequencies in the source. A criss-cross or herringbone artefact occurs when there is an error in data reconstruction.

Hardware
An MR system consists of the following components: 1) a large magnet to generate the magnetic field, 2) shim coils to make the magnetic field as homogeneous as possible, 3) a radiofrequency (RF) coil to transmit a radio signal into the body part being imaged, 4) a receiver coil to detect the returning radio signals, 5) gradient coils to provide spatial localization of the signals, and 6) a computer to reconstruct the radio signals into the final image. Each component contributes to making the examination faster and easier.
Magnet
There are different types of magnets used for diagnostic and research imaging. Permanent magnets have field strengths limited to about 0.064-0.3 Tesla, while resistive magnets range from about 0.1 to 0.4 Tesla. Superconductive magnets are almost unlimited: the field strength can be as low as 0.15 T and can go up to 7 T or higher. Cryogens and refrigeration are required for a superconductive magnet to keep the system cool and maintain its field. The stronger the magnet, the higher the signal-to-noise ratio (SNR). A higher-performance gradient system, measured in mT/m per axis, helps to decrease the repetition time (TR) and echo time (TE). Shorter TRs and TEs reduce the scanning time, down to subsecond imaging. With gradient development it is now possible to pursue real-time MR scanning.
Radio Frequency (RF) coils
The body part to be examined determines the shape of the antenna coil used for imaging. Most coils are round or oval-shaped, and the body part to be examined is inserted into the coil's open centre. Some coils, rather than encircling the body part, are placed directly on the patient over the area of interest. These "surface coils" are best for thin body parts, such as the limbs, or superficial portions of a larger body structure, such as the orbit within the head or the spine within the torso. Another form of surface coil is the endocavitary coil, in which the imaging coil is designed to fit within a body cavity, such as the rectum. This enables a surface coil to be placed close to internal organs that may be distant from surface coils applied to the exterior of the body. Endocavitary coils may also be used to image the wall of the cavity itself.
Gradients
The principal role of the gradient coils is to produce linear changes in the magnetic field in each of the x, y and z directions. By combining gradients in pairs of directions, oblique imaging can be performed. Gradient specifications are stated in terms of a slew rate, which is equal to the maximum achievable amplitude divided by the rise time; typical modern slew rates are around 150 T/m/s. The gradient coils are shielded in a similar manner to the main windings; this is to reduce eddy currents induced in the cryostat, which would degrade image quality.
Safety of MRI
MRI is generally considered safe, since it does not use ionizing radiation. Nevertheless, a number of potential safety issues concerning MRI must be raised, some related to potential direct effects on the patient from the imaging environment and others related to indirect hazards.
Static Field Effects
The most obvious safety implication is the strength of the magnetic field produced by the scanner. There are three forces associated with exposure to this field: a translational force acting on ferromagnetic objects brought close to the scanner (the projectile effect), the torque on patient devices and implants, and forces on moving charges within the body, most often observed as a superposition on the ECG signal. In the main, sensible safety precautions and the screening of patients mean that there are seldom any problems. Of major concern is the re-assessment of medical implants and devices deemed safe at 1.5 Tesla which may not have been tested at higher fields; this is becoming an issue as 3.0 T scanners become more commonplace.
The extension of the magnetic field beyond the scanner is called the fringe field. All modern scanners incorporate additional coil windings which restrict the field outside the imaging volume. It is mandatory to restrict public access within the 5 Gauss line, the field strength at which the magnetic field interferes with pacemakers.
Gradient Effects
These come under the term 'dB/dt' effects, referring to the rate of change in field strength due to gradient switching. The faster the gradients can be turned on and off, the quicker the MR image can be acquired. At around 60 T/s peripheral nerve stimulation can occur, which although harmless may be painful. Cardiac stimulation occurs well above this threshold. Manufacturers now employ other methods of increasing imaging speed (so-called 'parallel imaging') in which some gradient encoding is replaced.
RF Heating Effects
The repetitive use of RF pulses deposits energy, which in turn causes heating in the patient. This is expressed in terms of SAR (specific absorption rate, in W/kg) and is monitored by the scanner computer. For fields up to 3.0 Tesla, the value of SAR is proportional to the square of the field, but at high fields the body becomes increasingly conductive, necessitating the use of increased RF power. Minor patient burns have resulted from the use of high-SAR scans combined with some other contributory factor, e.g. adverse patient or coil-lead positioning, but this is still a rare event.
Noise
The scans themselves can be quite noisy. The Lorentz forces acting on the gradient coils, due to the current passing through them in the presence of the main field, cause them to vibrate. These mechanical vibrations are transmitted to the patient as acoustic noise. As a consequence, patients must wear earplugs or headphones while being scanned. Again, this effect (the force on the gradients) increases at higher field, and manufacturers are using techniques to combat it, including lining the scanner bore or attaching the gradient coils to the scan room floor, thereby limiting the degree of vibration.

Claustrophobia
Depending on the mode of entry into the scanner (e.g. head first or feet first), various sites have reported that between 1% and 10% of patients experience some degree of claustrophobia, which in extreme cases results in their refusal to proceed with the scan. Fortunately, modern scanners are becoming wider and shorter, drastically reducing this problem for the patient. In addition, bore lighting, ventilation, and the playing of music all help to reduce this problem to a minimum.
Bioeffects
There are no known or expected harmful effects on humans at field strengths up to 10 Tesla. At 4 Tesla some unpleasant effects have been anecdotally reported, including vertigo, flashing lights in the eyes, and a metallic taste in the mouth. Currently, pregnant women are normally excluded from MRI scans during the first trimester, although there is no direct evidence to support this restriction. The most invasive MR scans involve the injection of contrast agents (e.g. Gd-DTPA). This is a colourless liquid that is administered intravenously and has an excellent safety record. Minor reactions such as a warm sensation at the site of injection or back pain are infrequent, and more extreme reactions are very rare.

Summary
MRI uses magnetism and radio frequencies (RF) to create diagnostic sectional images of the body. If a nucleus is spinning, it has angular momentum and a nuclear magnetic moment; this rotating charge acts as a current loop and produces a magnetic field. When a rotating nucleus is subjected to a magnetic field, it begins to precess. MR imaging is accomplished through various measurements of this movement of the nuclear magnetic field.
The frequency of precession is called the Larmor frequency and is critical to MR imaging. Production of the nuclear magnetic resonance signal requires applying an alternating RF field at the Larmor frequency and then listening to the RF emissions from the protons. Digital image reconstruction techniques are then used to create sectional and three-dimensional images.
The primary parameters used to modify and control the MRI process include proton spin density, repetition time (TR), echo time (TE), inversion time (TI), T1, and T2. The parameters used vary depending on the pulse sequence. The RF signal strength determines brightness, although it is also affected by field strength, section thickness, the MRI parameters, motion, spatial resolution, signal-to-noise ratio, and scan time.




