Agilent 5800 ICP-OES: Argon gas as an internal standard?

  During the set-up of our 5800, our Agilent technician created our default template to use Argon gas as the ISTD (internal standard) for analysis. I am still not sure what the reasoning for this may have been, especially as we are noticing that the Argon ratio increases with higher solids loading during calibration and analysis. With the Argon signal changing as the calibration concentration increases, we are getting very poor fit/correlation for our standard curves.

  We have since disabled this set-up, with much better results, but I have two questions as a result. We are running an aqueous-based system (no organics):

  1. Is there any reason the instrument may have been set up this way? I can't see any advantage to using Ar gas as the ISTD (other than its non-interfering wavelength).
  2. For us, the Ar ratio is typically around 0.95-1.05 for low-concentration samples (up to ~20 ppm), but it increases to 1.5-3.0 for higher-concentration samples (>25 ppm or so). Is this simply the instrument adjusting to the sample load in the plasma, or are there other factors that affect it? And what is considered atypical for the Argon ratio, i.e. at what point should we address it?
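
  To make the fit problem concrete, here is a rough sketch with made-up intensities. It assumes the software divides each analyte intensity by the ISTD ratio, which is the usual ratio-correction scheme but not something I have confirmed for the ICP Expert software:

      # Hypothetical illustration: ISTD-corrected calibration when the Ar ratio climbs
      # at the high standards (all intensities and ratios below are invented).
      conc       = [1, 5, 10, 20, 50, 100]              # standard concentrations, ppm
      raw_counts = [c * 1000.0 for c in conc]           # pretend perfectly linear raw response
      ar_ratio   = [1.0, 1.0, 1.0, 1.1, 1.8, 3.0]       # ratios like the ones we observe

      corrected = [i / r for i, r in zip(raw_counts, ar_ratio)]

      def r_squared(x, y):
          n = len(x)
          mx, my = sum(x) / n, sum(y) / n
          sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
          sxx = sum((a - mx) ** 2 for a in x)
          syy = sum((b - my) ** 2 for b in y)
          return sxy * sxy / (sxx * syy)

      print(r_squared(conc, raw_counts))   # 1.0 - the raw data are perfectly linear
      print(r_squared(conc, corrected))    # ~0.85 - the rising ratio drags down the top of the curve
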
  • One reason for using Argon as the internal standard is that it is always present in the plasma at a constant amount, whereas a more traditional internal standard like Yttrium or Scandium must either be spiked manually into every solution or added online via the peri pump. It is a way to monitor the stability of the plasma without having to add anything. There are pros and cons to each type of internal standard. What wavelength are you using for the Argon?

    You say your 20 ppm is low solids and the >25 ppm is high solids; I am confused by that. I would consider a high-solids sample to be in the % range, and the 5800 vertical torch can handle up to 20-30% solids (using the definition of g/100 mL for high solids here).

    Typically, when the internal standard ratio is < 1.0 it means there is suppression of signal or a matrix effect. When the ratio is > 1.0, it usually means something is going on in the sample introduction area, such as a leak, or the tubing has come out of the sample and air is being drawn into the plasma.
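
    As a rough sketch of those rules of thumb (the counts below are made up, and I am assuming the ratio is the sample's ISTD counts divided by the ISTD counts in the calibration blank; the exact reference ICP Expert uses may differ):

        # Compute an internal-standard ratio and flag it using the rules of thumb above.
        def istd_ratio(sample_istd_counts: float, reference_istd_counts: float) -> float:
            return sample_istd_counts / reference_istd_counts

        def interpret(ratio: float, tol: float = 0.10) -> str:   # tol is an arbitrary band
            if ratio < 1.0 - tol:
                return "ratio < 1: possible signal suppression / matrix effect"
            if ratio > 1.0 + tol:
                return "ratio > 1: check sample introduction (leak, uptake tubing, air ingress)"
            return "ratio ~ 1: plasma and sample introduction look stable"

        print(interpret(istd_ratio(700_000, 1_000_000)))   # 0.7 -> flags suppression/matrix effect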

  • Tina,

     Thanks for your reply. We are using 430.01 nm for Argon. When I said high solids, I guess I meant relative to each other: it is ONLY our higher-concentration standards (20 ppm, 50 ppm, 100 ppm) where we are seeing the Argon ratio increase. The 100 ppm standard (and samples at similarly high concentrations) can yield Argon ratios over 3. I don't believe any leaks are being introduced in the sample uptake; I can switch back to a lower concentration and immediately observe the Ar ratio return to ~1.0.

     Are there any other factors that could cause the ratio to rise in the manner described above? Is there anything else you would suggest we try on the method side to alleviate this issue? In addition to the higher Ar ratio, there also seems to be some suppression of the expected signal at the higher concentrations: we are seeing lower-than-expected intensities. Our standard solutions are a matrix of 10-15 metals in 5% HNO3/water (with 2% HF where required for some metals).

  • Which argon wavelength are you using?  Have you tried using multiple argon wavelengths to see if you see the same trend?  

  • We are using 430.01 nm... I am not sure why, but we will look into using a different one. Any suggestions?

    Having said that, our Ar counts are doubling from 7 million to 14 million on some samples/standards, and the higher Ar counts always correlate with the higher-concentration samples/standards. There isn't any reason the Ar intensity/ratio should be increasing with sample concentration, is there?
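
    Just to sanity-check the arithmetic: if the ratio is simply the sample's Ar counts divided by a reference reading (I am assuming the calibration blank, which may not be exactly what the software uses), the doubling lines up with the ratios we reported earlier:

        reference_counts = 7_000_000    # Ar 430.01 nm counts on a low-concentration solution
        high_std_counts  = 14_000_000   # Ar counts on a high-concentration standard
        print(high_std_counts / reference_counts)   # 2.0, in line with the 1.5-3.0 ratios we see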

  • I have always found 420 nm to be good. Depending on what elements are present in your samples and standards, the increasing concentrations could be causing the plasma to run hotter, and therefore the argon intensities increase. In the software, you can turn the internal standard ratio "off": on the Elements tab, open the internal standard drop-down, select "none", and the data will be automatically reprocessed. See what your data looks like without the internal standard.

    You could always add Y or Sc as an internal standard by using a Y-piece and an extra piece of peri pump tubing. Make up a 5 µg/mL solution of Y or Sc and select Y 371 nm or Sc 361 nm as your wavelength.
