During the set-up of our Agilent 5800, the Agilent technician created our default template using Argon gas as the ISTD (internal standard) for analysis. I am still not sure what the reasoning was, especially as we are noticing that the Argon ratio increases with higher solids loading during calibration and analysis. With the Argon signal ratio changing as the calibration concentration increases, we are getting very poor fit/correlation on our standard curves.
We have since disabled this set-up, with much better results, but I have two questions as a result. We are running an aqueous-based system (no organics):
- Is there any reason the instrument may have been set up this way? I can't see any advantage to using the Ar signal as the ISTD (other than its non-interfering wavelength).
- Is the Ar ratio increasing simply indicative of the instrument adjusting for the sample matrix in the plasma, or are there other factors that affect it? For us it is typically around 0.95-1.05 for low-concentration solids samples (up to ~20 ppm), but rises to 1.5-3.0 for higher-solids samples (>25 ppm or so). What is considered atypical for the Argon ratio, i.e., at what point should we address it?
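To illustrate why a concentration-dependent ISTD ratio wrecks the curve fit, here is a minimal sketch (not Agilent software; all numbers are hypothetical) of internal-standard correction: each analyte intensity is divided by the ISTD signal, so if the Ar ratio climbs with solids loading, the "corrected" responses at the high standards are suppressed and the calibration loses linearity.

```python
# Illustrative sketch of ISTD ratio correction and its effect on
# calibration linearity. All intensities and ratios are hypothetical.

def r_squared(xs, ys):
    """Coefficient of determination for a simple least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical calibration standards (ppm) with a perfectly linear
# raw analyte response.
conc = [1, 5, 10, 20, 30, 40]
raw = [c * 1000.0 for c in conc]

# A well-behaved ISTD holds a ratio near 1.0 across the whole range.
stable_istd = [1.0] * len(conc)
# A drifting ISTD ratio that climbs with solids loading, roughly like
# the 0.95-1.05 -> 1.5-3.0 behavior described above.
drifting_istd = [1.0, 1.0, 1.05, 1.2, 1.8, 2.5]

# ISTD correction: divide each raw intensity by the ISTD signal.
corrected_stable = [r / i for r, i in zip(raw, stable_istd)]
corrected_drift = [r / i for r, i in zip(raw, drifting_istd)]

print(round(r_squared(conc, corrected_stable), 4))  # stays ~1.0
print(round(r_squared(conc, corrected_drift), 4))   # noticeably below 1
```

The point of the sketch: the ISTD is only helping if its signal moves with the same nuisance factors (nebulizer drift, plasma fluctuations) that move the analyte signal, and stays flat with respect to analyte concentration. A ratio that scales with the standards instead folds matrix behavior into the calibration itself.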