Ways of Assessing Reliability in Research

      

Answers


1. The Test-Retest technique
It involves administering the same instrument twice to the same group of subjects, but after some time has elapsed. Stability reliability (sometimes called test-retest reliability) is the agreement of measuring instruments over time. To determine stability, a measure or test is repeated on the same subjects at a future date. The results are compared and correlated with the initial test to give a measure of stability.

An example of stability reliability is the method of maintaining weights used by the Kenya Bureau of Standards. Platinum objects of fixed weight (one kilogram, half a kilogram, etc.) are kept locked away. Once a year they are taken out and weighed, allowing scales to be reset so that they are "weighing" accurately. Keeping track of how much the scales are off from year to year establishes stability reliability for these instruments. In this instance, the platinum weights themselves are assumed to be perfectly stable.
Disadvantages
• Subjects may be sensitized by the first testing and hence may do better on the second test.
• Difficulty in establishing a reasonable interval between the two testing sessions.
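
Because the stability coefficient is simply the correlation between the two administrations, it can be computed directly. Below is a minimal Python sketch; the five subjects' scores and the pearson_r helper are hypothetical illustrations, not part of the original answer.

```python
# A minimal sketch of test-retest (stability) reliability, assuming
# hypothetical scores from two administrations of the same test.
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mean_x, mean_y = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical scores for five subjects, tested twice a month apart.
first_test = [72, 65, 88, 54, 79]
second_test = [75, 63, 90, 58, 81]

r = pearson_r(first_test, second_test)
print(f"Test-retest (stability) coefficient: r = {r:.2f}")
# A coefficient close to +1 suggests the instrument gives stable
# results over time; a low r signals poor stability reliability.
```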

2. Equivalent form
Equivalent-form reliability is the extent to which two forms of a test measure identical concepts at an identical level of difficulty. Equivalency reliability is determined by relating two sets of test scores to one another to highlight the degree of relationship or association. In quantitative studies, and particularly in experimental studies, a correlation coefficient, statistically referred to as r, is used to show the strength of the correlation between a dependent variable (the subject under study) and one or more independent variables, which are manipulated to determine effects on the dependent variable. An important consideration is that equivalency reliability is concerned with correlational, not causal, relationships.
For example, a researcher studying university Bachelor of commerce students happened to notice that when some students were studying for finals, their holiday shopping began. Intrigued by this, the researcher attempted to observe how often, or to what degree, these two behaviors co-occurred throughout the academic year. The researcher used the results of the observations to assess the correlation between studying throughout the academic year and shopping for gifts. The researcher concluded there was poor equivalency reliability between the two actions. In other words, studying was not a reliable predictor of shopping for gifts.
Two instruments are used. Specific items in each form are different, but they are designed to measure the same concept and are the same in number, structure and level of difficulty, e.g. alternate forms of the TOEFL or GRE.

Advantages
• Estimates the stability of the data as well as the equivalence of the items in the two forms

Disadvantages
• Difficulty in constructing two tests, which measure the same concept (time and resources).
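
As with test-retest reliability, the equivalence coefficient is the correlation between the two sets of scores. A minimal sketch, assuming hypothetical Form A and Form B scores for six subjects:

```python
# A minimal sketch of equivalent-form reliability. The scores are
# hypothetical: the same group of subjects takes Form A and Form B,
# two forms built to measure the same concept at the same difficulty.
import numpy as np

form_a = np.array([58, 72, 66, 81, 47, 90])
form_b = np.array([61, 70, 69, 78, 50, 88])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal
# entry is the equivalence coefficient r between the two forms.
r = np.corrcoef(form_a, form_b)[0, 1]
print(f"Equivalent-form coefficient: r = {r:.2f}")
```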

3. Internal consistency technique
Internal consistency is the extent to which the items of a test or procedure assess the same characteristic, skill or quality. It is a measure of the precision of the observers or of the measuring instruments used in a study. This type of reliability often helps researchers interpret data and predict the value of scores and the limits of the relationship among variables.
For example, a researcher designs a questionnaire to find out about college students' dissatisfaction with a particular textbook. Analyzing the internal consistency of the survey items dealing with dissatisfaction will reveal the extent to which items on the questionnaire focus on the notion of dissatisfaction.
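
The answer does not name a specific statistic, but the most widely used measure of internal consistency is Cronbach's alpha. The sketch below computes it for hypothetical responses to the textbook-dissatisfaction questionnaire; the data and helper function are illustrative assumptions.

```python
# A sketch of Cronbach's alpha for internal consistency.
# Rows = respondents, columns = questionnaire items (e.g. 1-5 scale).
import statistics

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / total variance)."""
    k = len(scores[0])                     # number of items
    items = list(zip(*scores))             # column-wise item scores
    item_vars = sum(statistics.variance(col) for col in items)
    totals = [sum(row) for row in scores]  # each respondent's total
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses: four students rating dissatisfaction
# with a textbook on three related items.
responses = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
]
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
# Alpha near 1 means the items consistently tap the same construct.
```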

4. Interrater reliability
Interrater reliability is the extent to which two or more individuals (coders or raters) agree. Interrater reliability addresses the consistency of the implementation of a rating system.
A test of interrater reliability would be the following scenario: two or more researchers are observing a high school classroom. The class is discussing a movie that they have just viewed as a group. The researchers have a sliding rating scale (1 being most positive, 5 being most negative) with which they rate the students' oral responses. Interrater reliability assesses the consistency of how the rating system is implemented. For example, if one researcher gives a "1" to a student response while another gives a "5", the interrater reliability would obviously be poor. Interrater reliability is dependent upon the ability of two or more individuals to be consistent. Training, education and monitoring skills can enhance interrater reliability.
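
Agreement in the scenario above can be quantified. A minimal sketch, assuming hypothetical 1-5 ratings from two researchers; raw percent agreement is shown alongside Cohen's kappa, a standard chance-corrected agreement statistic that the answer itself does not name:

```python
# A sketch of interrater agreement between two raters on a 1-5 scale.
from collections import Counter

def percent_agreement(r1, r2):
    """Fraction of items on which the two raters gave the same rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohen_kappa(r1, r2):
    """Agreement corrected for the level expected by chance."""
    n = len(r1)
    p_observed = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: probability both raters independently pick
    # the same category, based on each rater's marginal frequencies.
    p_chance = sum((c1[cat] / n) * (c2[cat] / n)
                   for cat in set(r1) | set(r2))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical ratings of eight student responses (1 = most positive).
rater_1 = [1, 2, 2, 4, 3, 1, 5, 2]
rater_2 = [1, 2, 3, 4, 3, 2, 5, 2]

print(f"Percent agreement: {percent_agreement(rater_1, rater_2):.0%}")
print(f"Cohen's kappa:     {cohen_kappa(rater_1, rater_2):.2f}")
```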

Titany answered the question on October 21, 2021 at 12:58

