This laboratory supplements some of the material covered in the Chem-351 Physical Chemistry-I lectures. The lab grade is separate from the final course grade and corresponds to one credit. Students work in pairs and perform the labs according to the schedule of experiments.

Each experiment requires reading assignments of important background material. The background reading is necessary so that each student has a feel for the experimental method and the equipment used in the experiment. The assigned material in the lab text, handouts, and other sources should be read before coming into the lab. Preparedness will help the student be more confident with the experiment and work more efficiently.

A bound lab notebook is required. All data taken in the lab should be entered directly into this notebook. Recording data in a lab notebook is an important documentation practice that will follow you throughout your career.

After an experimental setup has been assembled and before starting the procedure, have the instructor check the setup and ask the instructor to clear up any points of misunderstanding or confusion. Before leaving the lab, the student should make a mental rundown of the calculations to be performed, to be certain of having all the data needed. This includes the very important error estimates for the measurements made.

The main criteria for assigning grades to a report will be how well the report communicates the results, describes their derivation, and demonstrates comprehension of the experiment. Reports will be due one week after completing the lab.

 

PHYSICAL CHEMISTRY LAB SAFETY

 

After being instructed in the health and safety aspects of this laboratory, students have the responsibility to work in a safe manner. The safety policy of this lab consists of this sheet and the specific safety warnings for each experiment given in the lab text and instructor-supplied handouts. The most common hazards in this Physical Chemistry Laboratory are: 

       1. High-voltage electric circuits.  
       2. Harmful chemical vapors and caustics.  
       3. High and low pressures in glass vessels.  
       4. Hot glassware.  

No open flames are ever to be used in this lab. 

Students are required to wear eye protection whenever working with any of the above hazards. Students should familiarize themselves with the locations of the fire extinguisher, eye washer, shower, fire blanket, and EXITS in the lab room. 

EATING, SMOKING, and the use of cellular phones are PROHIBITED in the laboratory.

 

LABORATORY REPORT

 

A written lab report will be handed in by each student for each experiment. The lab reports have two objectives of approximately equal value. First, the reports should be written from the perspective of communicating to any interested outside observer the important details of the experiment, as if the experiment had never been done before and the results were new. Second, the reports should demonstrate to the instructor a comprehension of the principles of the experiment, the associated analysis of the data and uncertainties, and an interpretation of the results. Preparation of lab reports should be carried out with these two objectives in mind. Lab reports should be neat, but typing is not necessary.  
  Outline of a Lab Report (i.e., section headings) 
  Cover Sheet - available in lab. (The student will be advised what final results need to appear on the cover sheet for each experiment.)  
  Experimental Procedure - briefly indicate deviations from the procedure in the manual or handout.  
  Primary Data - neatly labeled tables of the data taken in the laboratory.  
  Calculations - a representative set of calculations, starting with the primary data and working toward the necessary derived quantities. This should not simply be a string of equations; include a bit of prose between equations explaining what is being done.  
  Results - appropriate tables of derived quantities and recorder charts.  
  Error Analysis - include representative calculations demonstrating the propagation of uncertainties.  
  Discussion - describe any deviations from theoretical or accepted values.  
  Identify the major source of error. (Which measurement introduced the major source of error?) 
  Answer the questions posed by the lab text or handout.  
  Appendix - Include a photocopy of original data (from your bound lab notebook) and any computer printouts.  

  Since students will work in pairs, and since several pairs will be preparing reports at the same time, there may be a temptation to copy. Students must present their own work for evaluation. Although students are free to discuss the experiments, lab reports that are too nearly the same will automatically be marked zero for plagiarism.

 

WRITING THE LAB REPORT

 

After each experiment, a report must be prepared according to the instructions below and submitted no later than the beginning of the next week's lab session in order to be eligible for credit. Any questions concerning the preparation of the report should be asked beforehand so that the report can be completed when it is due. Each student should have one lab notebook (a bound, graph-ruled notebook) in which the reports are written neatly in ink. Messy, illegible, and incomplete reports will receive no credit. All pages of the notebook should be numbered at the beginning. If you want to cancel a page for some reason, do not tear it out; just cross out the whole page. The report should consist of the following parts, presented in the following order: 

  1. DATE: The date on which the experiment was carried out should be written in the upper right hand corner of a new page on which the report begins. 

  2. TITLE: The title of the experiment should be centered below the date. 

  3. OBJECT: The purpose of performing the experiment should be stated clearly and briefly in your own words. The object should not be a repetition of the title of the experiment. 

  4. APPARATUS AND SET-UP: A list of the instruments used (excluding glassware) and a drawing of the set-up (if necessary) should be given. 

  5. REAGENTS: The chemical reagents used should be listed and their concentrations should be indicated. 

  6. DATA: The data should be recorded in ink on the special sheets provided in the lab book at the end of each experiment, and the signature of a lab instructor should be obtained before leaving the lab. While preparing the report, this sheet should be torn off the book and pasted into the notebook. If you make a mistake while recording the data, simply put a line through the wrong value and write the correct value in an appropriate space above the crossed-out value. Do not recopy the data on another sheet; submit the original signed data sheet. 

  7. CALCULATION: Each step of the calculations performed should be shown clearly so that they can be followed easily. Calculations that are not given in proper order will receive no credit. If the same calculation is to be performed on a set of data, you can show the calculation only once and indicate the remaining results in tabular form. All numerical values should be accompanied by their proper units. 

  8. CONCLUSION: The conclusion should consist of the following parts: 

       A) A statement indicating whether the observations are in accordance with the predictions of the theory. 

       B) An error discussion for each result reported, including 

          1) a list of the possible sources of random and systematic errors; 

          2) a clear statement of the best numerical estimate and uncertainty in proper units, rounded to the correct number of significant figures; 

          3) a comparison of the result with an accepted result, whose reference (including the name of the author and the book or journal, the publisher, year of publication, and the page number) should be given. (If no literature value can be found, the list of all the relevant sources searched should be supplied.) 

  The essential principles of error discussion are summarized below. For further information and worked examples, you may consult the reference book [1].  

  9. GRAPHS: Graphs should be drawn on millimetric graph paper, using a separate sheet for each graph. Each graph should be pasted firmly on a separate page of your notebook. A title and properly labelled axes should be presented on each graph. When plotting graphs, give some thought to the choice of scale to make the best use of the paper. Do not squeeze the plot into a small space. Each data point and error bar (whenever applicable) should be clearly indicated. If any points are omitted, the reason for doing so should be explained in the conclusion. 

ERROR DISCUSSION 

"Error" in a scientific measurement means the inevitable uncertainty that is present in the measurement. This does not include mistakes which can be avoided by doing the experiment carefully. All experiments must be performed carefully and correctly to yield meaningful results. However, there are some errors that are still present no matter how carefully a measurement is performed. These may be of two types: random and systematic. Random errors arise from small 
differences in observation. For example, when the number of seconds required for a reaction mixture to change color is measured a few times, it is likely that the results will not be the same each time, but will vary by a small amount. As the reaction time of the experimenter in starting and stopping the watch is not constant, it will turn out that the measured value is a little smaller than the true value one time and a little larger another time. If many readings are taken, the amount of error lying on each side of the true value can be assumed to be equal on the average. Therefore, the average of a sufficiently large number of measurements would be equal to the true value if no other errors were present. Unfortunately, there is another type of error called systematic error. Systematic errors affect each measurement by the same amount and in the same direction. They are constant errors that arise because the conditions of the experiment are not exactly like those required by theory or because the measuring instrument has a faulty calibration. For example, if the watch used for timing the color change were slow by one tenth of a second during the period measured, each trial would appear to have taken one tenth of a second less than the true time, and therefore, the average of these trials would also be one tenth of a second less. Thus, this type of error can not be eliminated by performing a large number of measurements. Often, it is not even possible to guess its magnitude unless there are specialized standard instruments of high accuracy against which the measuring instruments can be calibrated. Accuracy is a measure of how close to the true value a measured value is, while precision indicates how close to each other the repeated measurements are. High precision indicates that random errors are small, however, it does not imply high accuracy. Measurements that are 
reproducible may still be far away from the true value, due to the presence of systematic errors. Even high accuracy does not mean that there are no systematic errors. It may be the case that there are no systematic errors, but it is equally likely that systematic errors of equal magnitude in opposite directions may have cancelled each other out. Knowing that all measurements are subject to error, it would be meaningless to report a measurement without an indication of the amount of error or uncertainty associated with it. It is good scientific practice to report measured or calculated values in the following manner: xbest ± dx where xbest is the best estimate of the value and dx is the uncertainty of this best estimate. The various methods of estimating the best value and the uncertainty for different types of measurements are given below. 

1. WHEN THE SAME VALUE IS MEASURED n TIMES: In this case, the best value is the mean (or average) of the n readings. The uncertainty in each one of the measurements may be assumed to be equal to the average uncertainty of the n measurements, called the standard deviation (or probable error). The uncertainty of the mean or best value may be expressed as the standard deviation of the mean (or standard error), or as a confidence limit. These terms are defined as follows: 

  The mean: xmean = (x1 + x2 + ... + xn) / n 

  The standard deviation: s = [ Σ(xi - xmean)² / (n - 1) ]^(1/2) 

  The standard deviation of the mean: smean = s / n^(1/2) 
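As a sketch, these three quantities can be computed with the Python standard library alone (the readings below are hypothetical):

```python
# Mean, standard deviation, and standard deviation of the mean
# for a set of repeated measurements.
import math
import statistics

readings = [5.4, 5.6, 5.5, 5.3, 5.7]   # hypothetical repeated measurements

n = len(readings)
mean = statistics.mean(readings)        # the best value
s = statistics.stdev(readings)          # standard deviation, (n - 1) in the denominator
s_mean = s / math.sqrt(n)               # standard deviation of the mean

print(f"mean = {mean}, s = {s:.4f}, s_mean = {s_mean:.4f}")
```

Note that `statistics.stdev` uses the (n - 1) denominator shown above, appropriate for a sample of repeated measurements.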

When the uncertainty in the mean is expressed as the standard deviation of the mean, there is no further indication of how good an estimate this is. However, when the confidence limit, lp, is used, it is possible to state the percentage of confidence in the estimate of uncertainty. This is especially important when the number of repeated measurements is small (n < 20). The confidence limit, lp, is: 

  lp = tp s 

where p denotes the percentage of confidence, which is a measure of the reliability of the limits of error set by lp. For example, "5.5 ± 0.1 with 90% confidence" means that 90 out of 100 measurements of the value whose best estimate is 5.5 are likely to give a result between 5.4 and 5.6. Obviously, it is more difficult to make such a generalization when the number of measurements actually carried out is small. Therefore, the smaller n is, the larger lp must be for the same percentage of confidence. To make this adjustment in lp, s is multiplied by the constant tp, which depends on the number of measurements. Values of tp, corresponding to various percentages of confidence, are given in Table-1 as a function of n. 
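A minimal sketch of the confidence-limit calculation; the tp value below is an assumed placeholder, since Table-1 is not reproduced here, and must be replaced with the tabulated entry for your n and p:

```python
# Confidence limit l_p = t_p * s for a small set of repeated measurements.
import statistics

readings = [5.4, 5.6, 5.5, 5.3, 5.7]    # hypothetical data
s = statistics.stdev(readings)

t_90 = 2.13        # ASSUMED placeholder -- look up the real t_p in Table-1
l_90 = t_90 * s    # the confidence limit l_p

print(f"best = {statistics.mean(readings)} +/- {l_90:.2f} (90% confidence)")
```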

1.1. REJECTION OF DOUBTFUL DATA: Sometimes, one of the measurements seems to be obviously very different from the others. In that case, it may be necessary to neglect this stray value. To judge whether such an omission is justified, one of the following accepted criteria should be employed. 

1.1.1. USING THE STANDARD ERROR TO REJECT DOUBTFUL DATA: It can be shown that the probability that a measurement deviates by more than three standard deviations (3s) from the average is only 0.3%. Thus, by rejecting a datum whose deviation from the mean exceeds 3s, one can be (100 - 0.3)% = 99.7% confident that this omission is justified. 

1.1.2. USING THE Q TEST TO REJECT DOUBTFUL DATA: The range, R, of a series of measurements is defined simply as the difference between the largest and smallest values measured: 

  R = xlargest - xsmallest 

If a set of measurements includes a doubtful value, then the ratio, Qexpt, of the difference between the doubtful value and its nearest neighbor to the range provides an objective criterion for rejecting the doubtful value. Qexpt is given by 

  Qexpt = |xdoubtful - xnearest| / R 

and xdoubtful may be rejected if Qexpt is greater than Qcrit. Critical values of Q are listed in Table-2. 

Table-2: Rejection Quotients Based on the Range of a Set of Measurements (90% confidence) 

After rejecting a datum, the error calculations should be repeated for the new set of data, not including the rejected datum. The test used for rejecting the datum should be indicated, and the appropriate calculations leading to rejection should be shown clearly. 
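The Q test can be sketched as follows. The critical values below are the commonly tabulated 90%-confidence rejection quotients (Dixon's Q), included here as an assumption; confirm them against Table-2 before relying on them:

```python
# Q test for a single doubtful datum at 90% confidence.
# Q_CRIT_90 values are the widely tabulated Dixon's Q quotients (assumed here;
# check Table-2), keyed by the number of measurements n.
Q_CRIT_90 = {3: 0.94, 4: 0.76, 5: 0.64, 6: 0.56, 7: 0.51, 8: 0.47, 9: 0.44, 10: 0.41}

def q_test(values):
    """Return (doubtful_value, q_expt, rejected) for the most extreme datum."""
    xs = sorted(values)
    r = xs[-1] - xs[0]              # the range R
    q_low = (xs[1] - xs[0]) / r     # gap between the smallest value and its neighbor
    q_high = (xs[-1] - xs[-2]) / r  # gap between the largest value and its neighbor
    if q_low >= q_high:
        doubtful, q_expt = xs[0], q_low
    else:
        doubtful, q_expt = xs[-1], q_high
    return doubtful, q_expt, q_expt > Q_CRIT_90[len(xs)]

print(q_test([5.4, 5.5, 5.5, 5.6, 6.9]))
```

For the hypothetical data shown, the largest reading is flagged: its Qexpt of about 0.87 exceeds the n = 5 critical value of 0.64.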

1.2. REPORTING THE RESULT: The resulting value of any measurement performed n times should be reported in the form x ± lp with p% confidence, accompanied by proper units and rounded to the proper number of significant digits. 

2. WHEN A VALUE IS OBTAINED FROM THE Y-INTERCEPT OR SLOPE OF A STRAIGHT LINE DRAWN THROUGH n EXPERIMENTAL DATA POINTS: 

2.1. DRAWING THE BEST STRAIGHT LINE THROUGH n EXPERIMENTAL DATA POINTS: The method of least-squared error is used to obtain the equation of the best straight line through the experimental data points. Let the equation of this line be y = A + Bx. Then, the values of the constants A and B are given by: 

  B = (n Σxiyi - Σxi Σyi) / (n Σxi² - (Σxi)²) 

  A = (Σyi - B Σxi) / n 

2.3.2. WHEN THE UNCERTAINTY IN x VALUES IS NOT NEGLIGIBLE: In situations where the x values are suspected to carry appreciable uncertainty, a graphical method can be used to determine the uncertainties in the y-intercept and slope, provided an estimate of the uncertainty in the x values can be made. 
This method is based on drawing a rectangle of width 2l(xi) and height 2l(yi) around each data point, l(xi) and l(yi) denoting the error limits in the x and y values respectively, with the data point positioned at the center of this rectangle. Each rectangle represents the collection of points most likely to contain the "true" point. To find the uncertainty in the y-intercept, two lines corresponding to the minimum and maximum y-intercept, one below and one above the best straight line obtained by the method of least squares and having the same slope as this best line, are drawn such that both lines pass through every rectangle. The uncertainty in the y-intercept is given by one-half the difference in the y-intercepts of these lines. To find the uncertainty in the slope, the two lines corresponding to the minimum and maximum slope are drawn similarly. This time the y-intercept of the lines is kept constant and equal to the y-intercept of the best line. The uncertainty in the slope is given by one-half the difference in the slopes of these lines. 

l(xi) and l(yi) should be estimated as realistically as possible. The limit of error in a given case depends on the characteristics and capabilities of the instrument used for measuring the value in question, the reproducibility of its reading, the quality of its calibration, and other factors. l(yi) may be taken to be equal to sy if this is judged to be satisfactory. Note that this method allows the treatment of uncertainties that are not constant, but vary with the value of x or y itself. In general, measurements of length, volume, or time have uncertainties that are constant with respect to the value measured. For example, the uncertainty in a volume read from a buret graduated in tenths of a milliliter is ± 0.05 mL, whether the actual reading is 1.20 mL or 41.20 mL. However, some instruments have inherent uncertainties expressed in terms of percentages of the actual readings. For example, a voltmeter reading may have an uncertainty of ±1%, which means that a reading of 1 V will have an uncertainty of ±0.01 V, while a reading of 100 V will have an uncertainty of ±1 V. Electrical instruments in general tend to have such percentage uncertainties. Although this graphical approach is simple, it becomes tedious as the number of points increases; this complication has been discussed by various authors [2,3,4,5,6], and computer programs have been developed to solve the problem more efficiently without sacrificing accuracy. 
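The least-squares line y = A + Bx of section 2.1 can be sketched directly, assuming the standard unweighted formulas for the slope B and intercept A (the data below are hypothetical):

```python
# Unweighted least-squares fit of a straight line y = A + B x.
def least_squares(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    B = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    A = (sy - B * sx) / n                            # y-intercept
    return A, B

A, B = least_squares([0, 1, 2, 3], [1.0, 3.1, 4.9, 7.0])
print(f"y = {A:.2f} + {B:.2f} x")
```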

2.4. CHECKING LINEARITY: In order to see whether the experimental results confirm the theoretical prediction that there is a linear relation between two variables x and y, the coefficient of correlation, r, is a useful criterion: 

  r = (n Σxiyi - Σxi Σyi) / [ (n Σxi² - (Σxi)²) (n Σyi² - (Σyi)²) ]^(1/2) 

The magnitude of this coefficient, |r|, may have values between 0 and 1. The value 0 corresponds to no correlation at all; this may be pictured as a totally random scatter of points with no organization in the shape of a straight line. The other extreme is when all the data points lie on the best line, which corresponds to perfect correlation; in this case |r| = 1. For this coefficient to be useful in deciding whether a linear relation exists at all, a dividing value must exist such that values lying above it are considered to define a linear relation, while values below it indicate that no such relation exists. To find this value, considering the relative error of the slope is a valid approach. For example, in order to meet the requirement that the error in the slope does not exceed one third the value of the slope itself, it can be shown that [7]: 

  |r| > 3 / (n + 7)^(1/2) 

The minimum value of |r| that satisfies this condition decreases as the number of observations increases. 
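A sketch of this linearity check, assuming the usual product-moment form of r (verify against the formula in your lab text):

```python
# Correlation coefficient r and the linearity criterion |r| > 3 / (n + 7)^(1/2).
import math

def correlation(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (n * sxy - sx * sy) / math.sqrt((n * sxx - sx**2) * (n * syy - sy**2))

xs, ys = [0, 1, 2, 3], [1.0, 3.1, 4.9, 7.0]   # hypothetical data
r = correlation(xs, ys)
linear = abs(r) > 3 / math.sqrt(len(xs) + 7)   # the criterion of reference [7]
print(f"r = {r:.4f}, linear relation supported: {linear}")
```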

2.5. REPORTING THE RESULT: Any slope or y-intercept should be reported together with the associated uncertainty, and an explanation of how this uncertainty was determined. Furthermore, the correlation coefficient should be reported, accompanied by a statement of whether its value is a confirmation of a linear relation between the two variables studied, based on the above condition that the error in the slope does not exceed one third its actual value. Proper units should be given. 

3. THE ERROR IN A CALCULATED VALUE: PROPAGATION OF ERRORS: The final result of an experiment usually depends on several measured quantities, each one of which is subject to some error. The rules used for determining the overall error in such a result in terms of the individual errors of the components (assuming independent random errors; see [1]) are summarized below. If q is a sum or difference, q = x + ... + z - (u + ... + w), then: 

  dq = [ (dx)² + ... + (dz)² + (du)² + ... + (dw)² ]^(1/2) 

If q is a product or quotient, q = (x × ... × z) / (u × ... × w), then: 

  dq/|q| = [ (dx/x)² + ... + (dz/z)² + (du/u)² + ... + (dw/w)² ]^(1/2) 

If q = Cx, where C is known exactly, then: 

  dq = |C| dx 

If q is a function of one variable, q(x), then: 

  dq = |dq/dx| dx 

If q is a power, q = x^n, then: 

  dq/|q| = |n| dx/|x| 

If q is any function of several variables x, ..., z, then: 

  dq = [ (∂q/∂x dx)² + ... + (∂q/∂z dz)² ]^(1/2) 

3.1. REPORTING THE RESULT: Any calculated result should be given with its uncertainty calculated using the rules above. 
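As a sketch of the first two propagation rules, assuming independent random errors so that uncertainties combine in quadrature (as in Taylor [1]):

```python
# Propagation of independent random errors in quadrature.
import math

def d_sum(*ds):
    """Uncertainty in q = x + ... - ...: absolute errors add in quadrature."""
    return math.sqrt(sum(d * d for d in ds))

def d_product(q, *pairs):
    """Uncertainty in a product/quotient q: relative errors add in quadrature.
    Each pair is (value, uncertainty)."""
    rel = math.sqrt(sum((d / x) ** 2 for x, d in pairs))
    return abs(q) * rel

# q = x + y with dx = 0.3 and dy = 0.4  ->  dq = 0.5
print(d_sum(0.3, 0.4))
# q = x * y with x = 2.0 +/- 0.02 (1%) and y = 5.0 +/- 0.1 (2%)
print(d_product(2.0 * 5.0, (2.0, 0.02), (5.0, 0.1)))
```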

4. SIGNIFICANT FIGURES: The last significant figure in any reported result should usually be of the same order of magnitude (in the same decimal position) as the uncertainty. However, this rounding must be done at the end: keeping one more significant figure than necessary during the calculations reduces the error introduced by rounding. If the result is expressed in scientific notation, the uncertainty should also be expressed in the same form. For example, the result 

rate = 3.4 × 10^7 ± 2 × 10^6 cm^3 mol^-1 s^-1 

would be simpler to understand if it were expressed in the form: 

rate = (3.4 ± 0.2) × 10^7 cm^3 mol^-1 s^-1 
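One common convention, consistent with the example above, is to round the uncertainty to one significant figure and then round the value to the same decimal position. A hypothetical helper (the name `report` is ours) sketches this:

```python
# Round an uncertainty to one significant figure and the value to match.
import math

def report(value, uncertainty):
    exp = math.floor(math.log10(abs(uncertainty)))  # decimal position of the first sig. fig.
    du = round(uncertainty, -exp)                   # uncertainty to one significant figure
    v = round(value, -exp)                          # value to the same decimal position
    return f"{v} +/- {du}"

print(report(3.4217e7, 2.3e6))   # -> "34000000.0 +/- 2000000.0"
```

For readability, the result would then be rewritten in scientific notation, e.g. (3.4 ± 0.2) × 10^7.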

5. COMPARISON OF THE RESULT WITH AN ACCEPTED VALUE: When a result is compared to an accepted value found in the literature, the result is said to be consistent with the accepted value if the difference between the accepted value and the evaluated result is less than the uncertainty of at least one of the values. If the same experimental technique has been used to obtain the accepted value, it is expected that the two values be consistent. If they are not found to be so, this points either to a systematic error or to the carelessness of the experimenter. When a different experimental technique has been used to obtain the accepted value, the two results are more likely to be inconsistent, since different sources of error, probably of different magnitudes, exist in the two experiments. A final word of caution: a result with a great amount of uncertainty is likely to be consistent with the accepted value. However, this does not mean that it is good. An acceptable result should have an uncertainty comparable in magnitude to the uncertainty in the accepted value and still be consistent with it. 
However, it is not easy to obtain such excellent results without specialized and accurately calibrated equipment. Being able to estimate reasonably the degree of accuracy of a result and discuss the possible sources of error is the most important piece of experience to be gained from error treatment. 
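The consistency criterion above can be sketched directly (the values below are hypothetical):

```python
# Two reported values x1 +/- dx1 and x2 +/- dx2 are consistent if their
# difference is smaller than the uncertainty of at least one of them.
def consistent(x1, dx1, x2, dx2):
    return abs(x1 - x2) < max(dx1, dx2)

print(consistent(9.75, 0.08, 9.81, 0.02))   # True: difference 0.06 < 0.08
print(consistent(9.60, 0.05, 9.81, 0.02))   # False: difference 0.21 exceeds both
```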

[1] Taylor, J.R., An Introduction to Error Analysis, University Science Books, California, 1982. 

[2] Christian, S.D., Lane, E.H., Garland, F., J. Chem. Educ., 51(7), 475, 1974. 

[3] Irvin, J.A., Quickenden, T.I., J. Chem. Educ., 60(9), 711, 1983. 

[4] Christian, S.D., Tucker, E.E., J. Chem. Educ., 61(9), 788, 1984. 

[5] Kalantar, A.H., J. Chem. Educ., 64(1), 28, 1987. 

[6] Ogren, P.J., Norton, J.R., J. Chem. Educ., 69(4), A130, 1992. 

[7] Barford, N.C., Experimental Measurements: Precision, Error and Truth, 2nd ed., John Wiley & Sons, Chichester, 1985.

 © 2018 by Yüksel İnel / All rights reserved