Title: Reliability and Validity
Description: Differences between two for critique of journals



Reliability and Validity
Reliability
You will encounter this word on many occasions within the context of research, but what
does it really mean?
Think about how you use the word in everyday language: we often speak about machines
or people being "reliable" (or not, depending on their behaviour!)
...
In
this context we are using the word reliable to mean "dependable" or "trustworthy"
...

In research the term reliability relates to the "repeatability" or "consistency" of a measure
...

When a researcher develops a new measuring scale or tool, e.g.
...

No measurement is perfectly reliable, but the more similar the results are with repeated
use, the more reliable the scale is likely to be
...

There are many different techniques that can be used to determine how reliable a
measure is
...

Split-half reliability: This assesses the internal consistency
...
If the test is reliable, people should obtain roughly the same scores on both
halves
Alternate-forms reliability: This measures the equivalence of the tool
...


If people tend to get the same score on the different forms of the test, these alternate
forms are said to be reliable
...
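The split-half idea described above can be sketched numerically. In the minimal example below (all scores invented for illustration), each respondent's items are divided into odd- and even-numbered halves, the two half-test totals are correlated, and the standard Spearman-Brown formula then estimates the reliability of the full-length test:

```python
# Hypothetical example: split-half reliability of a 6-item test
# scored by 5 respondents (all data invented for illustration).

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Each row: one respondent's scores on items 1..6.
scores = [
    [4, 5, 4, 5, 4, 5],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 2, 2, 1],
]

# Odd-even split: total the odd-numbered and even-numbered items separately.
odd = [sum(row[0::2]) for row in scores]
even = [sum(row[1::2]) for row in scores]

r_half = pearson(odd, even)
# Spearman-Brown correction: estimated reliability of the full-length test.
r_full = 2 * r_half / (1 + r_half)
print(f"half-test r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")
```

The same correlation idea applies to alternate-forms reliability: there, the two scores being correlated come from two different versions of the test rather than from two halves of one version.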
Principles and Methods
...
e.g. in observational methods, more than one observer may be
used to collect data independently
...
It is vital that the
equivalence, and thus the reliability, of their data is assessed
...

Intra-rater reliability
Similar assessments can be made of one individual radiologist's evaluation and reports to
check for the consistency and stability of his/her reporting technique
...
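A statistic commonly used to quantify the inter-rater (or intra-rater) agreement described above is Cohen's kappa, which corrects the raw proportion of agreement for the agreement expected by chance alone. A minimal sketch, with two hypothetical raters classifying a set of invented images:

```python
# Hypothetical example: inter-rater reliability via Cohen's kappa.
# Two radiologists independently classify the same 10 images
# (categories and classifications invented for illustration).

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    n = len(rater_a)
    # Observed proportion of cases where the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

rater_a = ["normal", "abnormal", "normal", "normal", "abnormal",
           "normal", "abnormal", "normal", "normal", "abnormal"]
rater_b = ["normal", "abnormal", "normal", "abnormal", "abnormal",
           "normal", "abnormal", "normal", "normal", "normal"]

kappa = cohens_kappa(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")
```

The same calculation assesses intra-rater reliability when the two lists are one rater's classifications of the same images on two separate occasions.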


Validity
The second concept you will encounter in your studies is that of Validity
...

For instance, if a researcher develops a scale to measure a patient's perceived pain, how
does he/she know that the scores collected actually reflect the patient's pain and not
something else, e.g.
...

The reliability and validity of a measurement are not totally independent of each other
...

For example: if you were to develop a questionnaire to assess patients' satisfaction with
their care whilst undergoing diagnostic imaging or radiotherapy treatment, but most of the
questions related to their behaviours at home or their leisure time, patients, rightly, might
question what this had to do with their satisfaction with their health-care! i.e. the
questionnaire would lack face validity
...



Content Validity:

This refers to judgements about the extent to which the content of the instrument appears
to include and examine, in a logical, balanced and comprehensive way, the full scope of
what it is intended to measure
...
e.g.
...



Criterion Validity:

Here a researcher will try to establish the relationship between his/her scale and some
other criterion measure which is accepted as valid, e.g.
...
(not
always easy as for many issues there are no such standards!)
For instance, a researcher who develops an instrument aimed at measuring a patient's
anxiety and depression may correlate the outcomes against the signs and symptoms
used by clinicians to diagnose a patient's psychological status
...
e.g. In the development of the Hospital Anxiety and Depression Scale (HAD Scale), which
is a well-respected scale aimed at measuring these two psychological states, the scores
obtained were correlated against the clinicians' evaluation (this may have included a
variety of clinical investigations) of the patient's condition and the subsequent diagnosis
...

This scale has since been used in numerous settings, which have reinforced its reliability
and validity; it is an easy-to-use, well-respected measuring tool used clinically and in
further research
...
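The criterion-validity check described above amounts to correlating the new scale's scores with the accepted criterion measure for the same patients. A minimal sketch with invented numbers, using Spearman's rank correlation (often chosen when the criterion is an ordinal clinician rating rather than an interval score):

```python
# Hypothetical example: criterion validity checked by correlating a new
# anxiety scale's scores against clinicians' severity ratings for the
# same patients (all numbers invented for illustration).

def ranks(values):
    """Rank values from 1..n, averaging the ranks of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            result[order[k]] = avg
        i = j + 1
    return result

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

scale_scores = [12, 18, 7, 21, 15, 9, 17, 5]   # new instrument
clinician_ratings = [2, 3, 1, 4, 3, 1, 3, 1]   # severity: 1 (mild) to 4 (severe)

rho = spearman(scale_scores, clinician_ratings)
print(f"Spearman rho = {rho:.2f}")
```

A strong positive correlation between the two measures would support the new scale's criterion validity; a weak one would suggest the scale is capturing something other than the criterion.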

Construct Validity
This is corroboration that the measuring tool is actually measuring the underlying
concept(s) it claims to measure
...
This
is just one approach to exploring construct validity
...
e.g. developing an effective questionnaire is not a simple task, and requires effort and
patience on the part of the researcher
...

When researchers offer a new measuring scale for public evaluation/use, they should
provide the evidence relating to the development and evaluation of the reliability and
validity of the tool, which would enable the user to assess its potential effectiveness in the
context of his/her own study, i.e.
...
The more
evidence a researcher can gather that an instrument is measuring what it is supposed to
be measuring, the more confidence we can have in its validity
...
For the most part these procedures cannot be
applied in an exact, meaningful way to qualitative data
...

Many researchers seek to evaluate the quality of their findings using procedures that
have been outlined by Lincoln and Guba (1985) who proposed four main criteria to
establish the trustworthiness of the data
...

• Credibility
Lincoln and Guba (1985) suggest this criterion involves two aspects:


designing and conducting the research in such a way that the believability of the
findings is enhanced



demonstrating the credibility by the use of various activities, e.g.:
a) prolonged engagement: spending sufficient time in data collection
activities to have an in-depth understanding of the issues, essential
for building trust and rapport with subjects so that they are more likely
to provide honest, trustworthy data
b)
...
Can relate to using more than one method of data
collection, e.g. using an interview followed by observation of the subject's
behaviour, to check for corroboration; or using more than one level of
person, i.e. collecting data from individuals, family, friends,
organisations etc.
...

c)
...
• Transferability
From a qualitative perspective, transferability is primarily the responsibility of the one
doing the generalising, i.e. the person who is interested in making a transfer needs to be
provided with sufficient

evidence about the data in order to be able to make a conclusion and justify his/her
decision to generalise
...

• Dependability
The quantitative perspective of reliability is based on the assumption of repeatability (see
above)
...
This emphasises the need for researchers to account for the ever-changing context within
which the research occurs, describing the changes that occur in the setting and how these
changes affected the way the research was approached
...



• Confirmability

Qualitative research tends to assume that each researcher brings a unique perspective to
the study
...
Inquiry audits
can help to establish dependability and confirmability of data
...
Some proponents appear
to argue that a correct reading of the quantitative criteria shows they can be applied
equally to the qualitative situation
...
There doesn't, however, appear to be a convincing, comprehensive
explanation of how to translate these processes between the two different perspectives!
What is essential, though, is to keep these two concepts in mind when evaluating the
findings of others or designing and conducting your own study
...
One must also question the ethics of such a study.