Titrimetry, in which we measure the volume of a reagent reacting stoichiometrically with the analyte, first appeared as an analytical method in the early eighteenth century. Unlike gravimetry, titrimetry initially did not receive wide acceptance as an analytical technique. Many prominent late-nineteenth century analytical chemists preferred gravimetry over titrimetry, and few of the standard texts from that era include titrimetric methods. By the early twentieth century, however, titrimetry began to replace gravimetry as the most commonly used analytical method.
Interestingly, precipitation gravimetry developed in the absence of a theory of precipitation. The relationship between the precipitate's mass and the mass of analyte, called a gravimetric factor, was determined experimentally by taking known masses of analyte (an external standardization). Gravimetric factors could not be calculated using the precipitation reaction's stoichiometry because chemical formulas and atomic weights were not yet available! Unlike gravimetry, the growth and acceptance of titrimetry required a deeper understanding of stoichiometry, thermodynamics, and chemical equilibria. By the early twentieth century the accuracy and precision of titrimetric methods were comparable to those of gravimetry, establishing titrimetry as an accepted analytical technique.
Overview of Titrimetry

titrimetry: Any method in which volume is the signal.
titrant: The reagent added to a solution containing the analyte and whose volume is the signal.

Titrimetric methods are classified into four groups based on the type of reaction involved. These groups are acid–base titrations, in which an acidic or basic titrant reacts with an analyte that is a base or an acid; complexometric titrations, involving a metal–ligand complexation reaction; redox titrations, where the titrant is an oxidizing or reducing agent; and precipitation titrations, in which the analyte and titrant react to form a precipitate. Despite the difference in chemistry, all titrations share several common features, providing the focus for this section.
Equivalence Points and End Points
equivalence point: The point in a titration where stoichiometrically equivalent amounts of analyte and titrant have reacted.

For a titration to be accurate we must add a stoichiometrically equivalent amount of titrant to a solution containing the analyte. We call this stoichiometric mixture the equivalence point. Unlike precipitation gravimetry, where the precipitant is added in excess, determining the exact volume of titrant needed to reach the equivalence point is essential. The product of the equivalence point volume, Veq, and the titrant's concentration, CT, gives the moles of titrant reacting with the analyte.
Moles titrant = Veq × CT
Knowing the stoichiometry of the titration reaction(s), we can calculate the moles of analyte.

end point: The point in a titration where we stop adding titrant.
indicator: A colored compound whose change in color signals the end point of a titration.
titration error: The determinate error in a titration due to the difference between the end point and the equivalence point.

Unfortunately, in most titrations we have no obvious indication that the equivalence point has been reached. Instead, we stop adding titrant when we reach an end point of our choosing. Often this end point is signaled by a change in the color of a substance added to the solution containing the analyte. Such substances are known as indicators. The difference between the end point volume and the equivalence point volume is a determinate method error, called the titration error. If the end point and equivalence point volumes coincide closely, then the titration error is insignificant and can be safely ignored. Clearly, selecting an appropriate end point is critical if a titrimetric method is to give accurate results.
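The calculation described above can be sketched in a few lines of Python. The specific titration (25.00 mL of HCl titrated with 0.1000 M NaOH, end point at 24.68 mL) and the function names are illustrative assumptions, not values or code from the text; the sketch simply applies moles titrant = Veq × CT and the 1:1 stoichiometry of HCl + NaOH.

```python
# Illustrative example (assumed values): titrating 25.00 mL of HCl
# with 0.1000 M NaOH, taking the end point volume as an estimate of Veq.

def moles_titrant(v_eq_L, c_titrant_M):
    """Moles of titrant delivered at the equivalence point: n = Veq * CT."""
    return v_eq_L * c_titrant_M

def moles_analyte(n_titrant, titrant_per_analyte=1):
    """Convert moles of titrant to moles of analyte using the
    titrant:analyte mole ratio from the balanced reaction."""
    return n_titrant / titrant_per_analyte

n_naoh = moles_titrant(24.68e-3, 0.1000)            # mol NaOH at the end point
n_hcl = moles_analyte(n_naoh, titrant_per_analyte=1)  # 1:1 for HCl + NaOH
c_hcl = n_hcl / 25.00e-3                            # molar concentration of HCl
```

Note that using the end point volume in place of Veq is exactly where the titration error enters: any difference between the two volumes propagates directly into the calculated analyte concentration.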