Agilent 5800 ICP-OES: Argon gas as an internal standard?

  During the set-up of our 5800, our Agilent technician created our default template to use argon gas as the ISTD for analysis.  I am still not sure what the reasoning for this may have been, especially as we are noticing that the argon ratio increases with higher solids loading during calibration and analysis.  Obviously, with the argon ratio changing as the calibration concentration increases, we are getting very poor fit/correlation for our standard curves.

  We have since disabled this set-up with much better results, but I have two questions as a result.  We are running an aqueous-based system (no organics):

  1. Is there any reason the instrument may have been set up this way?  I can't see any advantages to using Ar as the ISTD (other than its non-interfering wavelength).
  2. The Ar ratio is typically around 0.95-1.05 for our low-concentration solids samples (up to ~20 ppm) but increases to 1.5-3.0 for higher-solids samples (>25 ppm or so).  Is this just indicative of the instrument adjusting to the higher sample load on the plasma, or are there other factors that affect it?  And what is considered atypical for the argon ratio, i.e. at what point should we address it?
  • Unless you are strictly matrix-matching your calibration standards to your samples, it is best to use an internal standard to account for matrix effects. This is standard practice in ICP-OES. Most people use yttrium or scandium, an element that is not naturally present in the samples, as the internal standard. You can add the internal standard via the peristaltic pump, or manually spike each solution. The concentration of the internal standard is typically 2-5 ppm for Y or Sc, and the wavelengths used are typically Y 371 nm and Sc 360 nm.

    There are many papers on matching the excitation energies of the internal standard to those of the analytes, but in practice, if your analyte wavelength is less than 400 nm, use the Y 371 nm or Sc 360 nm wavelengths. If the analyte wavelengths are higher than 400 nm, try to find an internal standard wavelength higher than 400 nm as well. The software will do the ratio and subsequent correction for you when you turn internal standardization on (a rough sketch of what that ratio correction amounts to is shown after this reply). You may also choose argon 420 nm as an internal standard, for example to monitor changes in the plasma.

    I would suggest adding Y, Sc, and Ar as your internal standards and comparing the results to find the best one for your particular sample matrix. I would also suggest spiking a sample and checking your spike recovery every 10 samples or so, to see which one gives you the best accuracy. You can go to the elements tab and change the wavelength that the analyte is being ratioed to, and the software will automatically re-calculate the results on the analysis tab.

    As far as using argon as the internal standard, it is easy because argon is naturally present in the plasma and there is nothing to add. However, results are mixed: I have gotten very good results with argon 420 nm for samples with high % total dissolved solids, especially with high alkali metal concentrations, such as Na in the hundreds to thousands of ppm. But I would say that the results are not always consistent. It depends on your plasma conditions, the wavelengths used, etc. I hope this helps you.
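
    For anyone who wants to see what the internal-standard correction and the spike-recovery check amount to numerically, here is a minimal Python sketch. This is not Agilent ICP Expert code; the function names and the intensity/concentration values are made-up examples, used only to illustrate the arithmetic described above.

    ```python
    # Minimal sketch of internal-standard ratio correction and a spike-recovery check.
    # NOT instrument software -- all names and numbers below are illustrative only.

    def istd_ratio(istd_intensity_sample: float, istd_intensity_reference: float) -> float:
        """Ratio of the internal-standard signal in a sample to its signal in a
        reference solution (e.g. the calibration blank). Drift or matrix
        suppression/enhancement shows up as a ratio away from ~1.0."""
        return istd_intensity_sample / istd_intensity_reference

    def corrected_intensity(analyte_intensity: float, ratio: float) -> float:
        """Divide the raw analyte intensity by the ISTD ratio, so that a matrix
        that suppresses (or enhances) both signals equally cancels out."""
        return analyte_intensity / ratio

    def spike_recovery_percent(spiked_result_ppm: float,
                               unspiked_result_ppm: float,
                               spike_added_ppm: float) -> float:
        """Percent recovery of a known spike; roughly 90-110 % is a common
        acceptance window."""
        return (spiked_result_ppm - unspiked_result_ppm) / spike_added_ppm * 100.0

    if __name__ == "__main__":
        # Hypothetical readings: Y 371 nm ISTD counts in the blank vs. a high-solids sample.
        ratio = istd_ratio(istd_intensity_sample=41_000, istd_intensity_reference=50_000)

        # Raw analyte counts suppressed by the same matrix; the correction scales them back up.
        corrected = corrected_intensity(analyte_intensity=12_300, ratio=ratio)
        print(f"ISTD ratio: {ratio:.2f}, corrected analyte intensity: {corrected:.0f}")

        # Hypothetical spike check: 10 ppm added to a sample that read 4.1 ppm unspiked.
        recovery = spike_recovery_percent(13.8, 4.1, 10.0)
        print(f"Spike recovery: {recovery:.0f} %")
    ```

    The instrument software applies this ratio per replicate and per wavelength automatically once internal standardization is turned on; the sketch is only meant to show why an Ar ratio drifting up to 1.5-3.0 on high-solids samples will distort the corrected intensities and therefore the calibration fit.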
