I'm currently validating a developed method in a serum matrix (blood component), but when I started the specificity study (chemical spiking of the matrix) I got back some highly elevated recoveries for P and Ca that I cannot figure out.
Some details for context:
Rinse and sample diluent - 1% nitric acid with <1% Triton X-100
Online internal standard - 10 ppm Rh & Sc, also containing a 2.5% caesium buffer as an ionisation suppressant.
Elements of interest include P, Ca, Na, K, Mg
Reading Mode - Axial for P, Mg and Radial for Ca, Na, K
Instrument - ICP-OES 5100 with SPS4 autosampler
The calibration curves form perfectly with low % error on the points, and the read-back of the standards run as samples is okay. Running a serum reference material returns acceptable values for P and Ca per the manufacturer's certificate, and even chemically spiking the reference material gives a recovery close to 100%. Adding a known chemical spike to the diluent alone also returns the correct spike value for Ca and P. However, when I add a known chemical spike to the actual serum samples themselves, the P and Ca values are usually highly elevated; in some samples recovery is a normal 100%, and in other samples Ca recovery is low.
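For clarity, by "recovery" I mean the standard spike-recovery calculation: (spiked result − unspiked result) / spike added × 100. A minimal sketch with purely hypothetical numbers (not my actual data):

```python
def spike_recovery(spiked_result, unspiked_result, spike_added):
    """Percent recovery of a known spike added to a matrix sample.

    All three values must be in the same concentration units (e.g. mg/L).
    """
    return (spiked_result - unspiked_result) / spike_added * 100.0

# Hypothetical example: serum reading 95 mg/L unspiked,
# 148 mg/L after a 50 mg/L spike -> 106% recovery.
print(round(spike_recovery(148.0, 95.0, 50.0), 1))
```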
The system is also very clean, and I've replaced the components of the introduction system with new, clean ones to see if that made any difference.
To me the issue seems related to the actual serum samples rather than the method: the calibration standard read-backs and the correct spike recovery from the reference material suggest the method is working as it should.
If anyone has any suggestions for me to try or experience in this, I'd appreciate your feedback.