External calibration and internal standardization good practice

Hi,

I have a few questions about the calibration of an ICP-OES method. For my graduation study, I am developing a method for determining Al, Ca, Fe, K, Na, Ni, P, Si, V and Zn in a matrix akin to crude oil, using microwave-assisted acid digestion.

Taking into account the dilutions in the sample prep, my measuring range goes from 0.08 to 10 ppm (which corresponds to 4 to 500 mg/kg in the sample). As I could not find any discussion on the subject, I was wondering what is considered good laboratory practice for the calibration error and the %RSE of the calibration function. By default the maximum allowed calibration error is set to 5% in ICP Expert. Is this also standard in most developed ICP-OES methods, or are lower (or higher) calibration errors common in this field? The same question goes for the %RSE of the calibration: lower is always better, but what is generally deemed acceptable?

My other question is about the process of selecting internal standard wavelengths for a method. Our 5800 ICP system is outfitted with an AVS7, so we add our IS online; we currently use yttrium as the IS. It is often advised to match the IS emission lines as closely as possible to your analytical lines, in terms of both wavelength and state (atomic/ionic). As the recommended yttrium lines in ICP Expert are all ionic (II), I am left wondering to what extent this matching is done in practice. It is also advised to match the excitation/ionization potentials of analytical and IS lines, but I haven't found a reliable resource to do so. There are many publications using yttrium for multi-element analyses, but often the IS lines used are not named and the selection process is not described.

At first I was trying out 4 ionic and 4 atomic IS lines for the 10 elements I am quantifying, but I don't feel there is much logic or scientific reasoning behind my selection process. I wonder whether it would make more sense to use just 2 or 3 IS lines in total for all elements.

In general I need to read up on this more, but if anyone has some wisdom to share in the meantime, that's more than welcome! Also, if more information is needed, please let me know.

  • You can have a look at this webinar, if you haven't already: www.agilent.com/.../crude-oil-analysis. It will not answer all of your questions, but it is highly recommended.

  • In my experience, if you are working within the linear range for a given wavelength, a 5 to 10% calibration error is reasonable. The maximum % error that you set depends on the accuracy you require, and on whether the wavelength is linear over the concentration range. %RSE is another way to evaluate the accuracy of each point on the curve, and is an alternative to the correlation coefficient. I don't know of a universally "acceptable" value; it depends on the accuracy your laboratory requires.
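To make the two metrics concrete, here is a minimal sketch of how the per-standard calibration error and the %RSE can be computed from a linear fit. The concentrations and intensities below are purely illustrative (not real data), and the %RSE formula uses the common definition with n − p degrees of freedom, where p is the number of fitted parameters (2 for a first-order fit with intercept):

```python
import math

# Hypothetical calibration data: standard concentrations (ppm) and
# instrument intensities (counts/s). Values are illustrative only.
conc = [0.08, 0.5, 1.0, 5.0, 10.0]
intensity = [810, 5050, 10100, 50200, 100900]

# Ordinary least-squares linear fit: intensity = slope * conc + intercept
n = len(conc)
mean_x = sum(conc) / n
mean_y = sum(intensity) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, intensity))
         / sum((x - mean_x) ** 2 for x in conc))
intercept = mean_y - slope * mean_x

# Back-calculate each standard from the fit; the per-standard calibration
# error is the relative deviation of the calculated concentration
calc = [(y - intercept) / slope for y in intensity]
for x, c in zip(conc, calc):
    print(f"std {x:6.2f} ppm -> calc {c:7.4f} ppm, error {100 * (c - x) / x:+.2f}%")

# %RSE over the whole curve; p = 2 fitted parameters (slope + intercept)
p = 2
rse = 100 * math.sqrt(sum(((c - x) / x) ** 2
                          for x, c in zip(conc, calc)) / (n - p))
print(f"%RSE = {rse:.2f}")
```

Note that with this definition the lowest standard dominates the %RSE, since the deviations are relative; that is exactly why it is a stricter check on the bottom of your range than the correlation coefficient.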

  • With respect to internal standard selection, theoretically you can try to match the excitation energy of the internal standard wavelength to that of the analyte wavelength. In practice, a lot of labs use Y 371 nm or Sc 361 nm. You can also use certain argon lines as internal standards. Good method development involves trying both argon and Y or Sc (or another element not present in the samples/standards). I have found argon to be useful when analyzing high-salt samples and viewing axially. If you view the analytes axially, make sure you view the internal standard axially as well.
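However the IS line is chosen, the correction itself is just a ratio: the analyte signal in each sample is scaled by how much the IS signal deviates from its reference value. A minimal sketch, with entirely made-up intensities (Y 371.030 nm and Ni 231.604 nm are real lines, but the numbers are illustrative):

```python
# Hypothetical raw intensities; Y 371.030 nm added online as the IS
is_ref = 42000.0  # IS intensity measured with the calibration standards
samples = {
    "digest_A": {"Ni 231.604": 1250.0, "Y 371.030": 39500.0},
    "digest_B": {"Ni 231.604": 1180.0, "Y 371.030": 43800.0},
}

corrected = {}
for name, sig in samples.items():
    # If the IS signal is suppressed in a sample (ratio < 1), the analyte
    # signal is scaled up by the same factor, and vice versa
    factor = is_ref / sig["Y 371.030"]
    corrected[name] = sig["Ni 231.604"] * factor
    print(f"{name}: IS recovery {sig['Y 371.030'] / is_ref:6.1%}, "
          f"corrected Ni = {corrected[name]:.1f}")
```

This is also why matching plasma behavior (atomic vs. ionic, similar excitation energy) matters: the correction is only valid to the extent that the IS line drifts or suppresses by the same factor as the analyte line.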
