2. The Research Design
1 The Research Design
At the conclusion of this topic you should be able to:
• Differentiate between the different types of research and be able to discuss their advantages and disadvantages with the view of being able to recommend a type of research for a particular research problem
• Write a research question that is articulate and captures the issue being investigated
• Discriminate between a research question and a research hypothesis and be able to accurately determine when each is required or more appropriate
• Discuss the different research design and method options for researchers and be able to correctly recommend a design and method to suit a specific context and problem
• Identify and discuss the ethical issues inherent in business research and propose solutions and recommendations to ensure that research projects are conducted ethically and reliably
Research Methodology and Design
It is time to examine and study research methodology. A research methodology outlines the strategy for conducting an investigation in order to answer a research question. As a part of an overall research project, the researcher will need to plan out and share the procedures that will be used in the investigation. In this section you will review different approaches, designs, procedures, and methods for investigating your area of research. Specific tools will be described and evaluated so that you can determine which ones will help you to meet your particular research goals.
The Research Design
The overall design of a research project consists of its methods and procedures. Research design can be described as qualitative or quantitative in approach. It is also possible to have a mixture of the two approaches, both in overall design and in the specific methods used in the investigation. How you would choose between these types of research is covered in the next chapters, but in the meantime watch this video.
In the previous Moodle Book we mentioned the three main types of business research:
1. Exploratory,
2. Descriptive, and
3. Causal.
Let's explore these in more detail.
Exploratory research
This type of research is generally conducted to clarify ambiguous situations or to discover potential business opportunities. As the name implies this sort of research is NOT intended to provide conclusive evidence on which to make decisions. Therefore this form of research is often used as a first step in a research design, conducted to understand issues or to allow the researchers to gain a greater insight into the phenomena they are investigating. This type of research then guides and informs subsequent research efforts. Exploratory research is particularly useful for product development questions and for market testing of concepts.
Examples of methods that you might use when conducting exploratory research include:
• Previous research (secondary data) - investigate published work that discusses previous research and its results; examples include literature reviews, reports, and publicly available data such as census data and ASIC data;
• Pilot studies - small scale research projects designed to collect data from respondents similar to those to be used in a larger study, in order to test the accuracy of data collection instruments, the usefulness of questions, and the data analysis methods to be used; and
• Focus groups - a qualitative method in which a small group of people discusses a research topic, led by a moderator who guides and records the discussion.
Descriptive Research
This type of research is used to describe the characteristics of objects, people, groups, organisations or environments. In other words, this type of research tries to paint a picture of a given situation by addressing questions about who, what, when, where and how. Unlike exploratory research, descriptive studies are conducted after the researcher has gained a firm grasp of the situation being studied. Therefore exploratory research is often used to refine and direct descriptive research questions and hypotheses. Descriptive research is then often used to describe market segments, and indeed the Bureau of Statistics in most countries does this with its regular census data, explaining characteristics of the population both in terms of their demographics and their attitudes and preferences. Accuracy is critical in descriptive research because if inaccurate data is used to describe or predict a market opportunity this could be potentially disastrous.
Examples of methods most often used for descriptive research include:
1. Secondary data analysis - such as population statistics,
2. Surveys that are designed to answer specific questions, and
3. Diagnostic analysis - this type of research focuses on beliefs and feelings consumers have about and toward competing products.
Causal Research
Causal research seeks to identify cause-and-effect relationships between specific variables. That is, it aims to answer the question of when something causes an effect (change or behaviour). Because of the specificity of this type of research it is often conducted after both exploratory and descriptive studies. Examples of this sort of research would be questions about what impacts stock market prices, what factors immediately impact employee satisfaction, and how price changes will impact customer demand. Causal research projects take a longer time to complete and can be expensive, and it is therefore critical to have considerable knowledge of the situation and the questions that need to be asked.
It is very important when conducting this type of research to be aware of relationships that look to be causal but in reality are just spurious associations. For example, we know that there is a strong positive correlation between ice-cream sales and murder rates, such that when ice-cream sales decline there is also a decrease in the murder rate. Should we therefore conclude that people become murderers when they eat ice-cream? This is an example of a spurious association, where the apparent cause and effect is due to the impact of another variable not included in the study. In this case we know that ice-cream sales increase in hot weather, and coincidentally murder rates increase when the weather is hot and people are more active and outdoors, particularly in the evening. When the weather cools, ice-cream sales decline, people stay indoors more and are less active, and murder rates also drop.
Examples of methods used most often for causal research include:
1. Experiments,
2. Surveys, and
3. Test marketing.
This table summarizes the different types of research, showing how the level of uncertainty will dictate to a researcher the best type to use, as well as the types of questions answered by these forms of research.
There are some additional readings available for this topic. Go there now and have a read in more detail about research designs and methods.
This diagram is also a useful summary of the different research designs available to researchers.
2 Qualitative vs Quantitative methods
All researchers, including you, need to understand the full nature of both quantitative and qualitative approaches to research and evaluation methodologies in order to appropriately select the overall design that best fits your investigation. While described as distinct terms, qualitative and quantitative approaches to research methods and design are complementary and often overlap. Watch this helpful YouTube video to find out more about the differences in these methods.
Both qualitative and quantitative methodologies have their place in a research design, and the choice of method largely depends on the questions to be answered and the degree of uncertainty or ambiguity surrounding the research. Most good research projects combine the two methods. For example, in order to develop a good survey instrument it is important to have a deep understanding of the concept being investigated and the right questions to ask people. Qualitative research conducted first can address some of these concerns and result in a more targeted and effective quantitative study. Correspondingly, a quantitative study might yield results that are difficult to explain or understand. Qualitative research conducted to explore these findings can add a richness of understanding that pure numbers can't provide. For research that needs to be generalisable, replicable and objective, quantitative methods are required. Where the research can be more subjective, fluid and inductive, qualitative methods are appropriate. The following tables provide a good summary of when each type of research can and should be used. The next chapters look specifically at each type of research in more detail.
2.1 Qualitative methods
To summarise, qualitative methodology is inductive in its reasoning. The researcher selects a general topic and then begins collecting information to assist in the formation of an hypothesis. In this research design model, the data collected during the investigation creates the hypothesis for the researcher. In contrast, quantitative methods are deductive in their reasoning. The researcher investigates a phenomenon of interest with a specific hypothesis to guide or direct the work. The data that is collected is then subjected to statistical analysis to determine the answers.
So in business research terms, the less specific the research objective and question, the more likely it will be that a qualitative approach will be appropriate. Also, if the emphasis is on a deeper understanding of the motivations of customers or staff, or on developing new concepts, then qualitative methods are most appropriate. In summary, there are five situations in which qualitative methods are most appropriate. These are:
1. When it is difficult to develop specific and actionable problem statements or objectives for the research;
2. When the research objective is to gain further insight or understanding of an issue in more depth;
3. When the research objective is to learn how a phenomenon occurs in its natural setting or to learn how to express some concept in colloquial terms;
4. When some behaviour being researched is particularly context dependent - meaning the reasons something is liked or some behaviour is performed depend on the particular situation surrounding the event; and
5. When a fresh approach to studying some problem is needed.
There are a number of common approaches to conduct qualitative research and these are:
1. phenomenology - originating in philosophy, this approach studies human experiences based on the idea that human experience itself is subjective and determined by the context in which people live;
2. ethnography - originating in anthropology, this represents ways of studying cultures through methods that involve becoming highly engaged with and embedded in that culture;
3. grounded theory - originating in sociology, this is an inductive process where the researcher repeatedly poses questions about information provided to derive deeper explanations; and
4. case studies - originating in psychology and business, where the documented history of a particular person or event provides information for further analysis.
Within these approaches there are a number of commonly used qualitative research tools. The following table provides some of the advantages and disadvantages of each.
The following links also provide useful information about the advantages and disadvantages of some of these methods.
Observation - http://www.bcps.org/offices/lis/researchcourse/develop_observation.html
Focus groups - http://www.bcps.org/offices/lis/researchcourse/develop_focus.html
Case studies - http://www.bcps.org/offices/lis/researchcourse/develop_case.html
Qualitative research can also be done online and the following link provides some good resources of the common tools for this. http://www.qualitative-research.net/index.php/fqs/article/view/594/1289
For qualitative methods or research designs, the basic methodology is as follows:
1. Identify a general research question.
2. Choose main methods, sites, and subjects for research. Determine methods of documentation of data and access to subjects.
3. Decide what you will collect data on: questions, behaviors to observe, issues to look for in documents (interview/observation guide), how much (# of questions, # of interviews/observations, etc.).
4. Clarify your role as researcher. Determine whether you will be obtrusive or unobtrusive, objective or involved.
5. Study the ethical implications of the study. Consider issues of confidentiality and sensitivity.
6. Begin to collect data and continue until you begin to see the same, repeated information, and stop finding new information.
7. Interpret data. Look for concepts and theories in what has been collected so far.
8. Revise the research question if necessary and begin to form hypotheses.
9. Collect further data to address revisions. Repeat Steps 6 and 7.
10. Verify your data. Complete conceptual and theoretical work to make your findings. Present your findings in an appropriate form to your audience.
Activity
Have you participated in any qualitative research either in your workplace or as a customer? Was it clear what the purpose of the research was? Do you think the researcher collected the information they were searching for? Could a quantitative method have been used just as effectively?
2.2 Quantitative Methods
Quantitative research is research that addresses research objectives through empirical assessments that involve numerical measurement and analysis. This form of research is more able to stand on its own in the sense that it does not require interpretation for others to reach the same conclusions. Quantitative methods use approved and tested scales to measure concepts and to provide numerical values to represent them. These numerical values can then be used in statistical analysis and computations to test hypotheses. As we saw in the last chapter, qualitative researchers are more interested in observing, listening and interpreting what people say and do. This is why quantitative research is considered to be objective and not dependent on the individual researcher's interpretations or biases. Thinking back to the previous chapters on research designs, quantitative research is most likely to be used for causal and descriptive research purposes, not for exploratory research (though it can provide some insights here).
For quantitative methods or designs the basic method is as follows:
The overall structure for a quantitative design is based in the scientific method. It uses deductive reasoning, where the researcher forms an hypothesis, collects data in an investigation of the problem, and then uses the data from the investigation, after analysis is made and conclusions are shared, to show the hypothesis to be false or not false. The basic procedure of a quantitative design is:
1. Make your observations about something that is unknown, unexplained, or new. Investigate current theory surrounding your problem or issue.
2. Hypothesize an explanation for those observations.
3. Make a prediction of outcomes based on your hypotheses. Formulate a plan to test your prediction.
4. Collect and process your data. If your prediction was correct, go to step 5. If not, the hypothesis has been proven false. Return to step 2 to form a new hypothesis based on your new knowledge.
5. Verify your findings. Make your final conclusions. Present your findings in an appropriate form for your audience.
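Step 4 above (testing the prediction against the collected data) is usually carried out with a statistical significance test. The sketch below uses a simple permutation test on made-up scores for a treatment and a control group; the data, group sizes and significance threshold are all hypothetical.

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical observations: outcome scores for two groups.
treatment = [78, 85, 90, 73, 88, 92, 81, 86]
control = [70, 75, 82, 68, 74, 79, 72, 77]

observed = mean(treatment) - mean(control)  # the predicted effect

# Permutation test: if group labels don't matter (the null hypothesis),
# randomly reshuffled labels should often produce a difference this large.
pooled = treatment + control
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:len(treatment)]) - mean(pooled[len(treatment):]) >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference = {observed:.1f}, p = {p_value:.4f}")
# A small p-value means the prediction survives step 4; otherwise return to step 2.
```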
This table showing different types of quantitative designs might be useful.
Types of Quantitative Design
Descriptive research seeks to describe the current status of an identified variable. These research projects are designed to provide systematic information about a phenomenon. The researcher does not usually begin with an hypothesis, but is likely to develop one after collecting data. The analysis and synthesis of the data provide the test of the hypothesis. Systematic collection of information requires careful selection of the units studied and careful measurement of each variable.
Examples of Descriptive Research:
• A description of how second-grade students spend their time during summer vacation
• A description of the tobacco use habits of teenagers
• A description of how parents feel about the twelve-month school year
• A description of the attitudes of scientists regarding global warming
• A description of the kinds of physical activities that typically occur in nursing homes, and how frequently each occurs
• A description of the extent to which elementary teachers use math manipulatives

Correlational research attempts to determine the extent of a relationship between two or more variables using statistical data. In this type of design, relationships between and among a number of facts are sought and interpreted. This type of research will recognize trends and patterns in data, but it does not go so far in its analysis as to prove causes for these observed patterns. Cause and effect is not the basis of this type of observational research. The data, relationships, and distributions of variables are studied only. Variables are not manipulated; they are only identified and are studied as they occur in a natural setting.
*Sometimes correlational research is considered a type of descriptive research, and not as its own type of research, as no variables are manipulated in the study.
Examples of Correlational Research:
• The relationship between intelligence and self-esteem
• The relationship between diet and anxiety
• The relationship between an aptitude test and success in an algebra course
• The relationship between ACT scores and freshman grades
• The relationships between the types of activities used in math classrooms and student achievement
• The covariance of smoking and lung disease

Causal-comparative/quasi-experimental research attempts to establish cause-effect relationships among the variables. These types of design are very similar to true experiments, but with some key differences. An independent variable is identified but not manipulated by the experimenter, and the effects of the independent variable on the dependent variable are measured. The researcher does not randomly assign groups and must use ones that are naturally formed or pre-existing groups. Identified control groups exposed to the treatment variable are studied and compared to groups which are not.
When analyses and conclusions are made, determining causes must be done carefully, as other variables, both known and unknown, could still affect the outcome. A causal-comparative designed study, described in a New York Times article, "The Case for $320,000 Kindergarten Teachers," illustrates how causation must be thoroughly assessed before firm relationships amongst variables can be made.
Examples of Causal-Comparative Research:
• The effect of preschool attendance on social maturity at the end of the first grade
• The effect of taking multivitamins on students’ school absenteeism
• The effect of gender on algebra achievement
• The effect of part-time employment on the achievement of high school students
• The effect of magnet school participation on student attitude
• The effect of age on lung capacity

Experimental research, often called true experimentation, uses the scientific method to establish the cause-effect relationship among a group of variables that make up a study. The true experiment is often thought of as a laboratory study, but a laboratory setting is not actually required. A true experiment is any study where an effort is made to identify and impose control over all other variables except one. An independent variable is manipulated to determine the effects on the dependent variables. Subjects are randomly assigned to experimental treatments rather than identified in naturally occurring groups.
Examples of Experimental Research:
• The effect of a new treatment plan on breast cancer
• The effect of positive reinforcement on attitude toward school
• The effect of teaching with a cooperative group strategy or a traditional lecture approach on students’ achievement
• The effect of a systematic preparation and support system on children who were scheduled for surgery on the amount of psychological upset and cooperation
• A comparison of the effect of personalized instruction vs. traditional instruction on computational skill
Here are some helpful links that will provide you with more information about the various tools to use in a quantitative study.
Questionnaires/surveys - http://www.bcps.org/offices/lis/researchcourse/develop_questionaire.html
Interviews - http://www.bcps.org/offices/lis/researchcourse/develop_interviews.html#procon
3 Questions or Hypotheses?
Questions and hypotheses are testable explanations that are proposed before the methodology of a project is conducted, but after the researcher has had an opportunity to develop background knowledge. Although research questions and hypotheses are different in their sentence structure and purpose, both seek to predict relationships. Deciding whether to use questions or hypotheses depends on factors such as the purpose of the study, the approach and design of the methodology, and the expected audience for the research. Here is a nice video example that might also help.
A research question proposes a relationship between two or more variables. Just as the title states, it is structured in form of a question. There are three types of research questions:
• A descriptive research question seeks to identify and describe some phenomenon. An example: What is the ethnic breakdown of patients seen in the emergency room for non- emergency conditions?
• A differences research question asks if there are differences between groups on some phenomenon. For example: Do patients who receive massage experience more relief from sore muscle pain than patients who take a hot bath?
• A relationship question asks if two or more phenomena are related in some systematic manner. For example: If one increases his level of physical exercise does muscle mass also increase?
A hypothesis represents a declarative statement, a sentence instead of a question, of the cause-effect relationship between two or more variables. When writing an hypothesis it is important to make a clear and careful distinction between the dependent and independent variables and to be certain they are clear to the reader. Be very consistent in your use of terms. If appropriate, use the same pattern of wording and word order in all hypotheses.
So if you are using hypotheses you need to know:
1. What is a variable? - Answer: A variable is an object, event, idea, feeling, time period, or any other type of category you are trying to measure. There are two types of variables-independent and dependent.
2. What is an independent variable? - Answer: An independent variable is exactly what it sounds like. It is a variable that stands alone and isn't changed by the other variables you are trying to measure. For example, someone's age might be an independent variable. Other factors (such as what they eat, how much they go to school, how much television they watch) aren't going to change a person's age. In fact, when you are looking for some kind of relationship between variables you are trying to see if the independent variable causes some kind of change in the other variables, or dependent variables.
3. What is a dependent variable? - Answer: Just like an independent variable, a dependent variable is exactly what it sounds like. It is something that depends on other factors. For example, a test score could be a dependent variable because it could change depending on several factors such as how much you studied, how much sleep you got the night before you took the test, or even how hungry you were when you took it. Usually when you are looking for a relationship between two things you are trying to find out what makes the dependent variable change the way it does.
Many people have trouble remembering which is the independent variable and which is the dependent variable. An easy way to remember is to insert the names of the two variables you are using in your hypothesis sentence in the way that makes the most sense. Then you can figure out which is the independent variable and which is the dependent variable. For example, (Independent Variable) causes a change in (Dependent Variable) and it isn't possible that (Dependent Variable) could cause a change in (Independent Variable). Time Spent Studying (independent variable) causes a change in Test Score (dependent variable) and it isn't possible that the Test Score (dependent variable) could cause a change in the Time Spent Studying (independent variable).
We see that "Time Spent Studying" must be the independent variable and "Test Score" must be the dependent variable because the sentence doesn't make sense the other way around.
Some hints for writing good hypotheses.
Strong hypotheses:
• Give insight into a research question;
• Are testable and measurable by the proposed experiments;
• Spring logically from the experience of the staff;
Normally, no more than three primary hypotheses should be proposed for a research study.
Make sure you:
• Provide a rationale for your hypotheses — where did they come from, and why are they strong?
• Provide alternative possibilities for the hypotheses that could be tested — why did you choose the ones you did over others?
Although hypotheses come from the scientific method, they apply to business and social questions too. Suppose that we asked "How are presidential elections affected by economic conditions?" We could formulate this question into the following hypothesis: "When the national unemployment rate is greater than 7 percent at the time of the election, presidential incumbents are not re-elected."
Hypotheses can be created as four kinds of statements.
1. Literary null — a “no difference” form in terms of theoretical constructs. For example, “There is no relationship between support services and academic persistence of nontraditional-aged college women.” Or, “There is no difference in school achievement for high and low self-regulated students.”
2. Operational null — a “no difference” form in terms of the operation required to test the hypothesis. For example, “There is no relationship between the number of hours nontraditional-aged college women use the student union and their persistence at the college after their freshman year.” Or, “There is no difference between the mean grade point averages achieved by students in the upper and lower quartiles of the distribution of the Self-regulated Inventory.”
The operational null is the most used form for hypothesis-writing.
3. Literary alternative — a form that states the hypothesis you will accept if the null hypothesis is rejected, stated in terms of theoretical constructs. In other words, this is usually what you hope the results will show. For example, “The more that nontraditional-aged women use support services, the more they will persist academically.” Or, “High self-regulated students will achieve more in their classes than low self-regulated students.”
4. Operational alternative — Similar to the literary alternative except that the operations are specified. For example, “The more that nontraditional-aged college women use the student union, the more they will persist at the college after their freshman year.” Or, “Students in the upper quartile of the Self-regulated inventory distribution achieve significantly higher grade point averages than do students in the lower quartile.”
Regardless of which is selected, questions or hypotheses, this element of the research design needs to be as specific as possible. It should be realistic and feasible, and be formulated with time and resource constraints in mind.
If you have good hypotheses, they will lead into your Specific Aims. Specific aims are the steps you are going to take to test your hypotheses and what you want to accomplish in the research program. Make sure:
• Your objectives are measurable and highly focused;
• Each hypothesis is matched with a specific aim; and
• The aims are feasible, given the time and money you are requesting for the research.
REFLECTION: Identify some of the questions or hypotheses within studies you have read or had commissioned in your workplace. How do you think the researchers were able to determine these were sound propositions to make? Are there things that you disagreed with in the questions or hypotheses, or that you would do differently? What did you learn from these readings and studies that might be helpful when you write your own research plan in assignment 2?
4 Goals and Objectives
The words goal and objective are often confused with each other. They both describe things that a person may want to achieve or attain; however, each is different in its scope. Goals are more global in nature, affecting larger populations over longer time frames. They are the big vision and are more general in wording. Objectives are more specific and defined in nature. They are time-related to achieve a certain task, and are the measurable outcomes of activities undertaken to achieve goals; they are described as achieved or not achieved. Objectives should align with a study’s goals.
The following chart can help you determine whether a statement that you have written is a goal or an objective.
Goal vs Objective
• Meaning: A goal is the purpose toward which an investigation is directed; an objective is something that one's efforts or actions are intended to attain or accomplish - a purpose or target.
• Time frame: A goal is long term; an objective is short term.
• Measurability: A goal cannot be measured; an objective can be measured.
• Type of outcome: A goal is intangible; an objective is tangible.
• Kind of action: A goal describes a generic action; an objective describes a specific action.
• Overall plan: A goal describes a broad plan; an objective describes a narrow plan.
• Example: Goal - "The after-school program will help children read better." Objective - "The after-school remedial education program will assist 50 children in improving their reading scores by one grade level as demonstrated on standardized reading tests administered after participating in the program for six months."
A strong research idea should pass the “so what” test. Think about the potential impact of the research you are proposing. What is the benefit of answering your research question? Who will it help (and how)? Keep the research focused on the questions that need to be answered and don't pad out the research with "nice to know" but not essential questions. These will just fatigue your respondents and dilute the work that needs to be done.
A research focus should be narrow, not broad-based. For example, “What can be done to prevent substance abuse?” is too large a question to answer. It would be better to begin with a more focused question such as “What is the relationship between specific early childhood experiences and subsequent substance-abusing behaviors?” A well-thought-out and focused research question leads directly into your hypotheses. What predictions would you make about the phenomenon you are examining?
REFLECTION: Which do you think are easier to craft, goals or objectives? Why?
5 Validity and Reliability
Regardless of the design and method you choose, you still need to be able to convince your reader that your methods and results are both reliable and valid. The more results prove consistent over time and reflect accurate representations of the total populations under study, the more scientifically reliable they are. If the results of a study can be reproduced under a similar methodology, then the research methods are considered to be reliable. So, in summary, validity refers to the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure, while reliability is concerned with the accuracy of the actual measuring instrument or procedure. Each type of research design has its own standards for reliability and validity.
Watch this YouTube video, which explains these terms.
Validity
Validity determines whether the research truly measures what it was intended to measure, or how truthful the research results are. In other words, does the research instrument allow you to hit "the bull's eye" of your research objectives? Researchers generally determine validity by asking a series of questions, and will often look for the answers in the research of others. Researchers are concerned with both external and internal validity. External validity refers to the extent to which the results of a study are generalizable or transferable. (Most discussions of external validity focus solely on generalizability, or, in the case of qualitative research, transferability, because many qualitative research studies are not designed to be generalized.)
Internal validity refers to:
1. the rigor with which the study was conducted (e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and wasn't measured); and
2. the extent to which the designers of a study have taken into account alternative explanations for any causal relationships they explore. In studies that do not explore causal relationships, only the first of these definitions should be considered when assessing internal validity.
Scholars discuss several types of internal validity, the most common of which are face validity, criterion-related validity, construct validity and content validity.
Face Validity - Face validity is concerned with how a measure or procedure appears. Does it seem like a reasonable way to gain the information the researchers are attempting to obtain? Does it seem well designed? Does it seem as though it will work reliably? Unlike content validity, face validity does not depend on established theories for support.
Criterion-Related Validity - Criterion-related validity, also referred to as instrumental validity, is used to demonstrate the accuracy of a measure or procedure by comparing it with another measure or procedure that has already been demonstrated to be valid. For example, imagine that a hands-on driving test has been shown to be an accurate test of driving skills. A written driving test can then be validated using a criterion-related strategy, by comparing scores on the written test with scores on the hands-on test.
Construct Validity - Construct validity seeks agreement between a theoretical concept and a specific measuring device or procedure. For example, a researcher inventing a new IQ test might spend a great deal of time attempting to "define" intelligence in order to reach an acceptable level of construct validity. Construct validity can be broken down into two sub-categories: convergent validity and discriminant validity. Convergent validity is the actual general agreement among ratings, gathered independently of one another, where measures should be theoretically related. Discriminant validity is the absence of a relationship among measures which theoretically should not be related. To understand whether a piece of research has construct validity, three steps should be followed. First, the theoretical relationships must be specified. Second, the empirical relationships between the measures of the concepts must be examined. Third, the empirical evidence must be interpreted in terms of how it clarifies the construct validity of the particular measure being tested.
Content Validity - Content Validity is based on the extent to which a measurement reflects the specific intended domain of content. Content validity is illustrated using the following examples: Researchers aim to study mathematical learning and create a survey to test for mathematical skill. If these researchers only tested for multiplication and then drew conclusions from that survey, their study would not show content validity because it excludes other mathematical functions. Although the establishment of content validity for placement-type exams seems relatively straight-forward, the process becomes more complex as it moves into the more abstract domain of socio-cultural studies. For example, a researcher needing to measure an attitude like self-esteem must decide what constitutes a relevant domain of content for that attitude. For socio-cultural studies, content validity forces the researchers to define the very domains they are attempting to study.
Validity Example
Many recreational activities of high school students involve driving cars. A researcher, wanting to measure whether recreational activities have a negative effect on grade point average in high school students, might conduct a survey asking how many students drive to school and then attempt to find a correlation between these two factors. Because many students might use their cars for purposes other than or in addition to recreation (e.g., driving to work after school, driving to school rather than walking or taking a bus), this research study might prove invalid. Even if a strong correlation was found between driving and grade point average, driving to school in and of itself would seem to be an invalid measure of recreational activity.
Reliability
Reliability is the extent to which an experiment, test, or any measuring procedure yields the same result on repeated trials. Without the agreement of independent observers able to replicate research procedures, or the ability to use research tools and procedures that yield consistent measurements, researchers would be unable to satisfactorily draw conclusions, formulate theories, or make claims about the generalizability of their research. In addition to its important role in research, reliability is critical for many parts of our lives, including manufacturing, medicine, and sports. Reliability is such an important concept that it has been defined in terms of its application to a wide range of activities. For researchers, four key types of reliability are:
Equivalency Reliability
Equivalency reliability is the extent to which two items measure identical concepts at an identical level of difficulty. Equivalency reliability is determined by relating two sets of test scores to one another to highlight the degree of relationship or association. In quantitative studies, and particularly in experimental studies, a correlation coefficient, statistically referred to as r, is used to show the strength of the correlation between a dependent variable (the subject under study) and one or more independent variables, which are manipulated to determine effects on the dependent variable. An important consideration is that equivalency reliability is concerned with correlational, not causal, relationships.
For example, a researcher studying university English students happened to notice that when some students were studying for finals, their holiday shopping began. Intrigued by this, the researcher attempted to observe how often, or to what degree, these two behaviors co-occurred throughout the academic year. The researcher used the results of the observations to assess the correlation between studying throughout the academic year and shopping for gifts. The researcher concluded there was poor equivalency reliability between the two actions. In other words, studying was not a reliable predictor of shopping for gifts.
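The correlation coefficient r described above can be computed directly from two sets of scores. The sketch below is a minimal plain-Python illustration with hypothetical data; the function name and score lists are invented for the example, not taken from any particular study:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Covariance term: how the two sets of deviations move together
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    # Spread of each set of scores around its own mean
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Two supposedly equivalent test forms taken by the same five students
form_a = [70, 75, 80, 85, 90]
form_b = [68, 74, 79, 86, 91]
print(round(pearson_r(form_a, form_b), 3))
```

An r near 1 (or -1) indicates a strong association between the two measures, supporting equivalency reliability; an r near 0, as in the studying-versus-shopping example, indicates that one measure is a poor predictor of the other.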
Stability Reliability
Stability reliability (sometimes called test-retest reliability) is the agreement of measuring instruments over time. To determine stability, a measure or test is repeated on the same subjects at a future date. Results are compared and correlated with the initial test to give a measure of stability.
An example of stability reliability would be the method of maintaining weights used by the U.S. Bureau of Standards. Platinum objects of fixed weight (one kilogram, one pound, etc.) are kept locked away. Once a year they are taken out and weighed, allowing scales to be reset so they are "weighing" accurately. Keeping track of how much the scales are off from year to year establishes a stability reliability for these instruments. In this instance, the platinum weights themselves are assumed to have a perfectly fixed stability reliability.
Internal Consistency
Internal consistency is the extent to which tests or procedures assess the same characteristic, skill or quality. It is a measure of the precision between the observers or of the measuring instruments used in a study. This type of reliability often helps researchers interpret data and predict the value of scores and the limits of the relationship among variables.
For example, a researcher designs a questionnaire to find out about college students' dissatisfaction with a particular textbook. Analyzing the internal consistency of the survey items dealing with dissatisfaction will reveal the extent to which items on the questionnaire focus on the notion of dissatisfaction.
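One common statistic for this kind of internal consistency is Cronbach's alpha, which compares the variance of individual items with the variance of respondents' total scores. The sketch below is a plain-Python illustration using invented 1-5 dissatisfaction ratings (the data and function names are hypothetical, not from the source):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k = len(items)            # number of items on the questionnaire
    n = len(items[0])         # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_vars = sum(variance(item) for item in items)
    # Each respondent's total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical ratings: 3 dissatisfaction items, 5 respondents
item_scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(item_scores), 2))
```

A higher alpha (values above roughly 0.7-0.8 are often treated as acceptable) suggests the items are measuring the same underlying notion, here dissatisfaction with the textbook.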
Interrater Reliability
Interrater reliability is the extent to which two or more individuals (coders or raters) agree. Interrater reliability addresses the consistency of the implementation of a rating system.
A test of interrater reliability would be the following scenario: Two or more researchers are observing a high school classroom. The class is discussing a movie that they have just viewed as a group. The researchers have a sliding rating scale (1 being most positive, 5 being most negative) with which they are rating the students' oral responses. Interrater reliability assesses the consistency of how the rating system is implemented. For example, if one researcher gives a "1" to a student response while another researcher gives a "5", the interrater reliability would obviously be inconsistent. Interrater reliability is dependent upon the ability of two or more individuals to be consistent. Training, education and monitoring skills can enhance interrater reliability.
Reliability Example
An example of the importance of reliability is the use of measuring devices in Olympic track and field events. For the vast majority of people, ordinary measuring rulers and their degree of accuracy are reliable enough. However, for an Olympic event, such as the discus throw, the slightest variation in a measuring device -- whether it is a tape, clock, or other device -- could mean the difference between the gold and silver medals. Additionally, it could mean the difference between a new world record and outright failure to qualify for an event. Olympic measuring devices, then, must be reliable from one throw or race to another and from one competition to another. They must also be reliable when used in different parts of the world, as temperature, air pressure, humidity, interpretation, or other variables might affect their readings.
6 Research Ethics
Research Ethics
Ethical codes and guidelines are a means of establishing and articulating the values of a particular institution or society, and the obligations that it expects people engaged in certain practices to abide by. Research ethics involve requirements about the conduct of research, the protection of dignity and health of subjects and the publication of the information from the research.
Over the years there have been some famous and some less-well-known research ethics scandals that ultimately led to the establishment of the present system of independent ethics committees reviewing research. It should be noted that while the history of research ethics is often assumed to have begun with the scandals that took place in Nazi Germany, both unethical research and ethical regulation of research preceded those events. Human experimentation was conducted even before the 18th century; however, the ethical attitudes of researchers drew the interest of society only after the 1940s, because of the exploitation of human subjects in several cases. Professional codes and laws have been introduced since then in order to prevent scientific abuses of human lives. The Nazi experiments led to the Nuremberg Code (1947), which became the leading code for all subsequent codes made to protect human rights in research. This code focuses on voluntary informed consent, liberty of withdrawal from research, and protection from physical and mental harm, suffering and death. It also emphasizes the risk-benefit balance. The one weak point of this code was its reliance on the self-regulation of researchers, which can be abused in some research studies. All the declarations that followed forbade non-therapeutic research. It was only in 1964, with the Declaration of Helsinki, that the need for non-therapeutic research was recognised. The declaration emphasised the protection of subjects in this kind of research and strongly proclaimed that the well-being of individuals is more important than scientific and social interests.
Download this book about research ethics. The cases discussed in chapter 1 all led, in various ways, to the development of codes of ethics for researchers. Some of these studies produced great leaps forward in medical thinking even though people were harmed. In particular, let's review the famous Milgram experiments (1961-1963) on page 16. Here are two more articles about the reliability and validity of the experiments. Now go to the discussion forum and let's use this case to discuss the ethical issues. Students with classroom teaching can do this in class.
There are several reasons why it is important to adhere to ethical norms in research. First, norms promote the aims of research, such as knowledge, truth, and avoidance of error. For example, prohibitions against fabricating, falsifying, or misrepresenting research data promote the truth and avoid error. Second, since research often involves a great deal of cooperation and coordination among many different people in different disciplines and institutions, ethical standards promote the values that are essential to collaborative work, such as trust, accountability, mutual respect, and fairness. For example, many ethical norms in research, such as guidelines for authorship, copyright and patenting policies, data sharing policies, and confidentiality rules in peer review, are designed to protect intellectual property interests while encouraging collaboration. Most researchers want to receive credit for their contributions and do not want to have their ideas stolen or disclosed prematurely. Third, many of the ethical norms help to ensure that researchers can be held accountable to the public. For instance, federal policies on research misconduct, conflicts of interest, human subjects protections, and animal care and use are necessary in order to make sure that researchers who are funded by public money can be held accountable to the public. Fourth, ethical norms in research also help to build public support for research. People are more likely to fund a research project if they can trust the quality and integrity of the research. Finally, many of the norms of research promote a variety of other important moral and social values, such as social responsibility, human rights, animal welfare, compliance with the law, and health and safety. Ethical lapses in research can significantly harm human and animal subjects, students, and the public.
For example, a researcher who fabricates data in a clinical trial may harm or even kill patients, and a researcher who fails to abide by regulations and guidelines relating to radiation or biological safety may jeopardize their own health and safety or the health and safety of staff and students.
Codes and Policies for Research Ethics
Given the importance of ethics for the conduct of research, it should come as no surprise that many different professional associations, government agencies, and universities have adopted specific codes, rules, and policies relating to research ethics. Many government agencies, such as the National Institutes of Health (NIH), the National Science Foundation (NSF), the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), and the US Department of Agriculture (USDA), have ethics rules for funded researchers. Other influential research ethics policies include the Uniform Requirements for Manuscripts Submitted to Biomedical Journals (International Committee of Medical Journal Editors), the Chemist's Code of Conduct (American Chemical Society), the Code of Ethics (American Society for Clinical Laboratory Science), the Ethical Principles of Psychologists (American Psychological Association), the Statements on Ethics and Professional Responsibility (American Anthropological Association), the Statement on Professional Ethics (American Association of University Professors), the Nuremberg Code and the Declaration of Helsinki (World Medical Association).
The following is a rough and general summary of some ethical principles that various codes address:
Professional Conduct
This covers issues such as striving for honesty, objectivity and integrity in all scientific communications, research designs and protocols. Researchers need to honestly report data, results, methods and procedures, and publication status. Do not fabricate, falsify, or misrepresent data. Do not deceive colleagues, granting agencies, or the public. Strive to avoid bias in design, data analysis, interpretation and all other aspects of the research. Avoid or minimize bias or self-deception. Disclose personal or financial interests that may affect research.
Informed consent
Informed consent is the major ethical issue in conducting research. It means that a person knowingly, voluntarily and intelligently, and in a clear and manifest way, gives their consent to participate in the research. Informed consent is one of the means by which a patient's right to autonomy is protected, and it seeks to incorporate the rights of autonomous individuals through self-determination. It also seeks to prevent assaults on the integrity of the patient and to protect personal liberty and veracity. Of course, individuals can make an informed decision to participate in research voluntarily only if they have information on the possible risks and benefits of the research. Free and informed consent needs to incorporate an introduction to the study and its purpose, as well as an explanation of how the research subjects were selected and of the procedures that will be followed. It is essential to describe any physical harm or discomfort, any invasion of privacy and any threat to dignity, as well as how the subjects will be compensated in such a case. In addition, the subjects need to know any expected benefits, either to themselves or to science through the gaining of new knowledge. A disclosure of alternatives is also required. The ethical principle of beneficence refers to the Hippocratic "be of benefit, do not harm", and includes the professional mandate to do effective and significant research so as to better serve and promote the welfare of its constituents. Beneficence is sometimes difficult to predict when creating a hypothesis, especially in qualitative research. However, if the research findings prove that the work was not as beneficial as expected, this can raise immense ethical considerations, especially for nurses. Beneficence relates to the benefits of the research, while nonmaleficence relates to the potential risks of participation.
Nonmaleficence requires a high level of sensitivity from the researcher about what constitutes "harm", which can be physiological, emotional, social or economic in nature.
Respect for Intellectual Property
Honor patents, copyrights, and other forms of intellectual property. Do not use unpublished data, methods, or results without permission. Give credit where credit is due. Give proper acknowledgement or credit for all contributions to research. Never plagiarize.
Privacy, Anonymity and Confidentiality of information
Protect confidential communications, such as papers or grants submitted for publication, personnel records, trade or military secrets, and patient records. Honor and respect the confidentiality of the information gathered from subjects and only use that data for the purpose for which it was collected. The issue of privacy relates to the subject's freedom to determine the time, extent and general circumstances under which private information will be shared with or withheld from others. An invasion of privacy occurs when private information such as attitudes, beliefs, opinions and records is shared with others without the subject's knowledge or consent.
The issue of confidentiality and anonymity is closely connected with the principles of beneficence, respect for dignity, and fidelity. Anonymity is protected when the subject's identity cannot be linked with their personal responses. If the researcher is not able to promise anonymity, they have to address confidentiality, which is the management of private information by the researcher in order to protect the subject's identity. Confidentiality also means that individuals are free to give or withhold as much information as they wish to the person they choose. The researcher is responsible for maintaining a confidentiality that goes beyond ordinary loyalty, and this can sometimes raise ethical dilemmas for the researcher when confidentiality must be broken because of the moral duty to protect society.
Non-Discrimination
Avoid discrimination against colleagues or students on the basis of sex, race, ethnicity, or other factors that are not related to their scientific competence and integrity.
Legality
Know and obey relevant laws and institutional and governmental policies.
Animal Care
Show proper respect and care for animals when using them in research. Do not conduct unnecessary or poorly designed animal experiments.
Human Subjects Protection
When conducting research on human subjects, minimize harms and risks and maximize benefits; respect human dignity, privacy, and autonomy; take special precautions with vulnerable populations; and strive to distribute the benefits and burdens of research fairly.
* Adapted from Shamoo, A. and Resnik, D. (2009). Responsible Conduct of Research, 2nd ed. New York: Oxford University Press.