Chapter 3: Quality criteria, research ethics, and other research issues

Book name: Research Methods in Applied Linguistics

Writer: Zoltan Dornyei

Professor: Dr. Zoghi, M

Saeed Mojarradi Ph.D. candidate         Sunday, October 21, 2018

First and foremost come the quality criteria for research, because we can only claim that our investigation is indeed a 'disciplined' inquiry if we can set explicit quality standards to achieve. The second topic to cover is research ethics, which is a curiously neglected issue in much applied linguistic research and often surfaces only when researchers realize that they need to produce some 'ethical clearance' for their study if they want to submit the results to obtain a postgraduate degree or to be published in certain journals.

3.1 Quality criteria for research

'Validity' is another word for truth. (Silverman 2005: 210)

As we have seen, the basic definition of scientific research is that it is a 'disciplined' inquiry, and therefore one thing research cannot afford is to be haphazard or lacking rigor. The fragmented nature of the domain is well reflected by the fact that there does not even exist any universally accepted terminology to describe quality criteria, and the terms that are more widely known — 'validity' and 'reliability' in particular — are subject to ongoing criticism, with various authors regularly offering disparate sets of alternatives. Of course, given the huge importance of research quality, this situation is not surprising: representatives of different research traditions understandably emphasize quality parameters that will allow the type of inquiry they pursue to come out in a good light. The problem is that the scope of possible quality criteria is rather wide — ranging from statistical and methodological issues through real-world significance and practical values to the benefits to the research participants — and some parameters that seem to do one research approach justice do not really work well with other approaches.

Many QUAL researchers deny the relevance of 'validity' and 'reliability' as defined in quantitative terms. In order to introduce quality criteria that are more suitable for QUAL inquiries, several alternative terms have been proposed: validity has been referred to as 'trustworthiness', 'authenticity', 'credibility', 'rigor', and 'veracity', but none of these has reached a consensus and the terminology in general is a highly debated topic. There have also been attempts to match QUAL and QUAN terms (for example, external validity = transferability; reliability = dependability) but, surely, the whole rationale for developing a parallel terminology is the conviction that there are no straightforward parallels in the two research paradigms.

3.1.1 Quality criteria in quantitative research

The concept of 'reliability' is fairly straightforward, but when we look at 'validity' we find two parallel systems in the quantitative literature—one centered on 'construct validity' and its components, the other around the 'internal/external validity' dichotomy. Research validity concerns the whole research process and, following Campbell and Stanley (1963), focuses on the distinction between 'internal validity', which addresses the soundness of the research (i.e. whether the outcome is indeed a function of the various variables and treatment factors measured), and 'external validity', which addresses its generalizability. Measurement validity refers to the meaningfulness and appropriateness of the interpretation of the various test scores or other assessment procedure outcomes. Validity here is seen as a unitary concept, expressed in terms of 'construct validity', which can be further broken down into various facets such as content or criterion validity. It is this perspective (going back to Lado's work on testing) that produced the classic tenet that a test is valid if it measures what it is supposed to measure, even though the current view is that it is neither the instrument nor the actual score that is valid but rather the interpretation of the score with regard to a specific population. Thus, the discussion of quantitative quality standards is best divided into three parts: (a) reliability, (b) measurement validity, and (c) research validity.


Reliability

The term reliability comes from measurement theory and refers to the 'consistencies of data, scores or observations obtained using elicitation instruments, which can include a range of tools from standardized tests administered in educational settings to tasks completed by participants in a research study' (Chalhoub-Deville 2006: 2). In other words, reliability indicates the extent to which our measurement instruments and procedures produce consistent results in a given population in different circumstances. The variation of the circumstances can involve differences in administrative procedures, changes in test takers over time, differences in various forms of the test, and differences in raters (Bachman 2004b). If these variations cause inconsistencies, or measurement error, then our results are unreliable. It is important to remember that, contrary to much of the usage in the methodological literature, it is not the test or the measuring instrument itself that is reliable or unreliable but rather the scores obtained from a particular population of test takers (Wilkinson and TFSI 1999).

Bachman (2004b) offers a detailed description of two general approaches whereby classic test theory provides estimates of reliability: (1) we can calculate the correlation between two sets of scores, for example between two halves of a test or two parallel forms, or two different raters’ ratings. (2) We can calculate a statistic known as Cronbach alpha (see Section 9.3), which is based on the variances of two or more scores and serves as an ‘internal consistency coefficient’ indicating how the different scores ‘hang together’ (for example, more than two raters’ scores or several parallel questionnaire items in a scale).
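The two estimates described above can be sketched in a few lines of code. The following is a minimal illustration, not a substitute for proper statistical software; the item scores are invented for demonstration (five hypothetical respondents answering a four-item questionnaire scale):

```python
def pearson_r(x, y):
    """Correlation between two sets of scores (e.g. two test halves or two raters)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(items):
    """Internal consistency of k item-score lists:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Invented illustrative data: rows are items, columns are respondents.
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 5],
    [4, 2, 5, 3, 4],
    [5, 3, 5, 2, 4],
]
print(round(cronbach_alpha(items), 2))          # internal consistency of the scale
print(round(pearson_r(items[0], items[1]), 2))  # correlation of two item scores
```

With these made-up scores the items 'hang together' well (alpha close to 1); in real questionnaire research alpha values above roughly 0.7–0.8 are conventionally taken to indicate acceptable reliability.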

Measurement validity

The concept of validity from a measurement perspective has traditionally been summarized by the simple phrase: a test is valid if it measures what it is supposed to measure. However, the scientific conceptualization of measurement validity has gone through some significant changes over the past decades. According to Chapelle’s (1999) description of the development of the concept in applied linguistics, validity was seen in the 1960s as a characteristic of a language test. Several types of validity were distinguished: ‘criterion validity’ was defined by the test’s correlation with another, similar instrument; ‘content validity’ concerned expert judgement about test content; and ‘construct validity’ showed how the test results conformed to a theory of which the target construct was a part.

This traditional conceptualization is still pervasive today. In 1985, the main international guidelines for educational and psychological measurement sponsored by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education—the AERA/APA/NCME Standards for Educational and Psychological Testing (AERA, APA, and NCME 1999)—replaced the former definition of three validities with a unitary concept, ‘construct validity’. This change was a natural consequence of the shift from seeing validity as an attribute of the test to considering it the truthfulness of the interpretation of the test scores. Lynch (2003: 149) summarizes the new conception clearly: ‘When examining the validity of assessment, it is important to remember that validity is a property of the conclusions, interpretations or inferences that we draw from the assessment instruments and procedures, not the instruments and procedures themselves’. Following the new approach, content- and criterion-related evidence came to be seen as contributors to the overall validity construct along with validity considerations of the consequences of score interpretation and use, and even reliability was seen as one type of evidence.

To conclude this brief overview of measurement validity, here is a list of four key points provided by Bachman (2004b) based on Linn and Gronlund’s work:

  • Validity is a quality of the interpretations and not of the test or the test scores.
  • Perfect validity can never be proven — the best we can do is provide evidence that our validity argument is more plausible than other potential competing interpretations.
  • Validity is specific to a particular situation and is not automatically transferable to others.
  • Validity is a unitary concept that can be supported with many different types of evidence.

Research validity

The second type of validity, ‘research validity’, is broader than measurement validity as it concerns the overall quality of the whole research project and more specifically (a) the meaningfulness of the interpretations that researchers make on the basis of their observations, and (b) the extent to which these interpretations generalize beyond the research study (Bachman 2004a). These two validity aspects were referred to as internal validity and external validity by Campbell and Stanley (1963) over four decades ago, and although there have been attempts to fine-tune this typology (for example, by Cook and Campbell 1979), the broadly dichotomous system has stood the test of time: a research study or experiment has internal validity if the outcome is a function of the variables that are measured or controlled in the study. External validity is the extent to which we can generalize our findings to a larger group, to other contexts, or to different times. A study is externally invalid if the results apply only to the unique sample or setting in which they were found.

The main threat to external validity in experimental studies involves any special interaction between our intervention/treatment and some characteristics of the particular group of participants which causes the experiment to work only in our study (for example, the experiment exploits a special feature of the treatment group that is usually not existent in other groups).

Main threats to research validity

Let us have a look at the most salient validity threats.

  • Participant mortality or attrition

In studies where we collect different sets of data from the same participants (for example, pre- and post-tests or multiple tests and questionnaires), subject dropout is always a concern.

  • The Hawthorne effect

The term comes from the name of a research site (in Chicago) where this effect was first documented: researchers investigating an electric company found that work production increased when they were present, regardless of the conditions the workers were subjected to.

The reason for such an irrational effect is that participants perform differently when they know they are being studied. Mellow et al. (1996: 334) found that this threat is particularly salient in applied linguistic research as it may be ‘the single most serious threat to studies of spontaneous language use’.

  • Practice effect

If a study involves repeated testing or repeated tasks (for example, in an experimental or longitudinal study), the participants’ performance may improve simply because they are gaining experience in taking the particular test or performing the assessed activity.

  • Participant desire to meet expectations (social desirability bias)

The participants of a study are often provided with cues to the anticipated results of the project, and as a result they may begin to exhibit performance that they believe is expected of them. A variation of this threat is when participants try to meet social expectations and over-report desirable attitudes and behaviors while under-reporting those that are socially not respected.
  • History

Empirical research does not take place in a vacuum and therefore we might be subject to the effects of unanticipated events while the study is in progress. (See, for example, Section 8.4 on the challenges of classroom research.) Such events are outside the research study, yet they can alter participants’ performance. The best we can do at times like this is to document the impact of the events so that later we may neutralize it by using some kind of statistical control.

Three basic quality concerns in qualitative research

Quantitative researchers sometimes criticize the qualitative paradigm for not following the principles of the ‘scientific method’ (for example, the objective and formal testing of hypotheses) or having too small sample sizes, but these concerns miss the point as they, in effect, say that the problem with qualitative research is that it is not quantitative enough. There are, however, certain basic quality concerns in QUAL inquiries that are independent of paradigmatic considerations.

I have found three such issues particularly salient:

1- Insipid data Focusing on ‘individual meaning’ does not offer any procedures for deciding whether the particular meaning is interesting enough (since we are not to judge a respondent’s personal perceptions and interpretations), and if it is not sufficiently interesting then no matter how truthfully we reflect this meaning in the analysis we will obtain only low-quality results that are ‘quite stereotypical and close to common sense’ (Seale et al. 2004: 2). In other words, the quality of the analysis is dependent on the quality of the original data, and I am not sure whether it is possible to develop explicit guidelines for judging one set of complex idiosyncratic meaning as better than another. As Seale et al. conclude, past practice in qualitative research has not always been convincing in this respect, and although taking theoretical sampling seriously does help (see Section 6.2), no qualitative sampling procedure can completely prevent the documentation of the unexciting.

2- Quality of the researcher Morse and Richards (2002) are right when they warn us that any study is only as good as the researcher, and in a qualitative study this issue is particularly prominent because in a way the researcher is the instrument—see Section 2.1.4. This raises a serious question: how can quality criteria address the researcher’s skills that are to a large extent responsible for ensuring the quality and scope of the data and the interpretation of the results? Again, quantitative research does not have to face this issue because a great deal of the researcher’s role is guided by standardized procedures.

3- Anecdotalism and the lack of quality safeguards The final quality concern has been described by Silverman (2005: 211) as follows: qualitative researchers, with their in-depth access to single cases, have to overcome a special temptation. How are they to convince themselves (and their audience) that their ‘findings’ are genuinely based on critical investigation of all their data and do not depend on a few well-chosen ‘examples’?

This is sometimes known as the problem of anecdotalism. Indeed, space limitations usually do not allow qualitative researchers to provide more than a few exemplary instances of the data that has led them to their conclusion, a problem that is aggravated by the fact that scholars rarely provide any justification for selecting the specific sample extracts…

The concept of consistency is also emphasized in Kirk and Miller’s (1986: 69) definition of reliability in field work as the degree to which ‘an ethnographer would expect to obtain the finding if he or she tried again in the same way’, that is, the degree to which ‘the finding is independent of accidental circumstances of the research’ (p. 20). Morse and Richards’ (2002: 168) definition sums up the consistency issue well but at the same time reveals why qualitative reliability has been played down in the past: ‘reliability requires that the same results would be obtained if the study were replicated’.

The problem is that replication is not something that is easy to achieve in a research paradigm where any conclusion is in the end jointly shaped by the respondents’ personal accounts and the researcher’s subjective interpretation of these stories. Having said that, it is possible to conduct reliability checks of various sub-processes within a qualitative inquiry, for example of the coding of interview transcripts by asking a second coder to code separately a sizable part of the transcript (either using the researcher’s coding template or generating the codes him/herself) and then reviewing the proportion of agreements and disagreements.
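The inter-coder reliability check described above can be quantified very simply as the proportion of coded segments on which the two coders agree. The sketch below uses invented codes and segment labels purely for illustration; real studies would typically also report a chance-corrected statistic such as Cohen's kappa:

```python
def agreement(codes_a, codes_b):
    """Proportion of segments that two coders assigned the same code to."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical codes assigned by the researcher and a second coder
# to the same six interview segments.
researcher   = ["motivation", "anxiety", "motivation", "strategy", "anxiety", "strategy"]
second_coder = ["motivation", "anxiety", "strategy",   "strategy", "anxiety", "strategy"]

print(f"{agreement(researcher, second_coder):.2f}")  # proportion of agreements
```

The disagreements flagged this way (here, the third segment) are themselves useful: reviewing them with the second coder often leads to a refined coding template for the rest of the analysis.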

Lincoln and Guba’s taxonomy of quality criteria

In a spirited denial of the allegations that qualitative research is ‘sloppy’ and qualitative researchers respond indiscriminately to the ‘louder bangs or brightest lights’, Lincoln and Guba (1985) introduced the concept of ‘trustworthiness’ as qualitative researchers’ answer to ‘validity’. They proposed four components to make up trustworthiness:

A Credibility, or the ‘truth value’ of a study, which is the qualitative counterpart of ‘internal validity’.
B Transferability, or the ‘applicability’ of the results to other contexts, which is the qualitative counterpart of ‘external validity’.
C Dependability, or the ‘consistency’ of the findings, which is the qualitative counterpart of ‘reliability’.
D Confirmability, or the neutrality of the findings, which is the qualitative counterpart of ‘objectivity’.

These terms are sometimes referred to as ‘parallel criteria’ because of their corresponding quantitative counterparts.

Maxwell’s taxonomy of validity in qualitative research

1- Descriptive validity concerns the factual accuracy of the researcher’s account. Maxwell (1992) regards this as the primary aspect of validity because all the other validity categories are dependent on it. It refers to what the researcher him/herself has experienced and also to ‘secondary’ accounts of things that could in principle have been observed, but that were inferred from other data. One useful strategy for ensuring this validity is ‘investigator triangulation’, that is, using multiple investigators to collect and interpret the data.

2-     Interpretive validity Descriptive validity was called a primary validity dimension because it underlies all other validity aspects and not because Maxwell (1992) considered descriptiveness the main concern for qualitative research. Instead, he argued, good qualitative research focuses on what the various tangible events, behaviors or objects ‘mean’ to the participants. Interpretive validity, then, focuses on the quality of the portrayal of this participant perspective. An obvious strategy to ensure this validity is to obtain participant feedback or member checking, which involve discussing the findings with the participants. (For more details, see below.)

3- Theoretical validity corresponds to some extent to the internal validity of the research as it concerns whether the researcher’s account includes an appropriate level of theoretical abstraction and how well this theory explains or describes the phenomenon in question.

4- Generalizability It is interesting that in labelling this category Maxwell (1992) did not use a phrase containing the word ‘validity’, even though there would have been an obvious term to use: external validity. This is because he further divided ‘generalizability’ into ‘internal generalizability’ and ‘external generalizability’, and this division would not have worked with the term ‘external validity’ (i.e. we cannot really have ‘internal external validity’).

Duff (2006) points out that many QUAL researchers view the term generalizability suspiciously because it is reminiscent of QUAN methodology, in which the capacity to generalize from the sample to some wider population is one of the key concerns. This is where Maxwell’s (1992) distinction between internal and external generalizability is enlightening: he agrees that generalizability plays a different role in QUAL research than it does in QUAN research and therefore internal generalizability is far more important for most qualitative researchers than is external generalizability.

5-    Evaluative validity refers to the assessment of how the researcher evaluates the phenomenon studied (for example, in terms of usefulness, practicability, desirability), that is, how accurately the research account assigns value judgments to the phenomenon. Thus, this validity aspect concerns the implicit or explicit use of an evaluation framework (for example, ethical or moral judgements) in a qualitative account, examining how the evaluative claims fit the observed phenomenon.

Evaluative validity is gaining importance nowadays with various ‘critical’ theories becoming increasingly prominent in the social sciences and also in applied linguistics.

Strategies to ensure validity in qualitative research

Throughout this chapter I have been uncertain about how much space to devote to describing the various systems of validity and reliability because these typologies are admittedly not too practical in themselves. However, I agree with Maxwell’s (1992) argument that such typologies offer a useful framework for thinking about the nature of the threats to validity and the possible ways that specific threats might be addressed. In the following, I list the most common strategies used to eliminate or control validity threats and to generate trustworthiness.

Building up an image of researcher integrity

It is my conviction that the most important strategy to ensure the trustworthiness of a project is to create in the audience an image of the researcher as a scholar with principled standards and integrity. At the end of the day, readers will decide whether they have confidence in one’s research not by taking stock of the various validity arguments but by forming an opinion about the investigator’s overall research integrity.

There are certain strategies that are particularly helpful in demonstrating the researcher’s high standards (provided, of course, those exist):

  • Leaving an audit trail

Offering a detailed and reflective account of the steps taken to achieve the results—including the iterative moves in data collection and analysis, the development of the coding frames, and the emergence of the main themes—can generate reader confidence in the principled, well-grounded, and thorough nature of the research process.

As Holliday (2007: 32) summarizes: ‘As in all research, a major area of accountability must be procedure, and when the choices are more open, the procedure must be more transparent so that the scrutinizers of the research can assess the appropriateness of the researcher’s choices.’

  • Contextualization and thick description

Presenting the findings in rich contextualized detail helps the reader to identify with the project and thus come on board.

  • Identifying potential researcher bias

Given the important role of the researcher in every stage of a qualitative study, identifying the researcher’s own biases is obviously an important issue in an inquiry, and it also creates an open and honest narrative that will resonate well with the audience (Creswell 2003).

  • Examining outliers, extreme or negative cases and alternative explanations

No research study is perfect and the readers know this. Therefore, explicitly pointing out and discussing aspects of the study that run counter to the final conclusion is usually not seen as a weakness but adds to the credibility of the researcher. Similarly, giving alternative explanations a fair hearing before we dismiss them also helps to build confidence in our results.

  • Respondent feedback (or ‘respondent validation’ or ‘member checking’) Because of the emphasis placed in qualitative research on uncovering participant meaning, it is an obvious strategy to involve the participants themselves in commenting on the conclusions of the study.
  • Peer checking

Qualitative studies often include reliability checks performed by peers. They always involve asking a colleague to perform some aspect of the researcher’s role — usually developing or testing some coding scheme, but they can also involve performing other activities such as carrying out an observation task — and then comparing the correspondence between the two sets of outcomes. This is a very useful strategy because even low correspondence can serve as useful feedback for the further course of the study, but unfortunately it is often difficult to find someone who is both competent and ready to engage in this time-consuming activity.

Research design-based strategies

Strategies concerning a study’s research design can provide the most convincing evidence about the validity of the research as they are an organic part of the project rather than being ‘add-ons’. In a way, these practices are not necessarily ‘strategic actions’ but simply examples of good research practice.

  • Method and data triangulation

The concept of ‘triangulation’ involves using multiple methods, sources or perspectives in a research project. Triangulation has traditionally been seen as one of the most efficient ways of reducing the chance of systematic bias in a qualitative study because if we come to the same conclusion about a phenomenon using a different data collection/analysis method or a different participant sample, the convergence offers strong validity evidence. However, it leaves the same question open as the one already raised with regard to participant feedback: how shall we interpret any emerging disagreement between the corresponding results?
  • Prolonged engagement and persistent observation

Research designs that emphasize the quantity of engagement with the target community/phenomenon carry more face validity: most people would, for example, treat an account by an ethnographer who has spent 15 years studying a community as a particularly credible account, even though this may not necessarily be so (for example, if the observer has ‘gone native’).

  • Longitudinal research design

Duff (2006) argues that longitudinal studies have the potential to increase the validity of the inferences that can be drawn from them because they can reveal various developmental pathways and can also document different types of interactions over time. (Longitudinal studies are described in Chapter 4.)

3.1.3 Quality criteria in mixed methods research

We have seen in this chapter that qualitative and quantitative research are associated with somewhat different quality criteria, and even though broad quality dimensions might be described using the uniform dichotomy of ‘reliability’ and ‘validity’, the specific details of how to operationalize these concepts vary.

So, what shall we do in mixed methods research? How can we satisfy both QUAL and QUAN quality criteria in a single study? This is an important question because mixed methods research is a relatively new approach and therefore researchers adopting mixed designs should be particularly careful to defend the methods they are employing.

I suggest that we consider three specific aspects of the quality of mixed methods research separately:

(a) The rationale for mixing methods in general;

(b) The rationale for the specific mixed design applied, including the choice of the particular methods; and

(c) The quality of the specific methods making up the study.

The rationale for mixing methods

Mixed methods research as an explicitly marked research approach is still in its infancy and therefore scholars applying the paradigm need to justify their choice clearly. (To a lesser degree, qualitative researchers also face this task, especially in content areas and research environments where there is still a dominance of quantitative methodology.)

The ‘design validity’ of the study

Although I have argued above against introducing new nomenclature for quality criteria, the term ‘design validity’ (Teddlie and Tashakkori 2003) appears to be relevant and necessary because it concerns a new aspect of internal validity, specific to mixed methods research.

It refers to the extent to which the QUAL and QUAN components of a mixed methods study are combined or integrated in a way that the overall design displays complementary strengths and no overlapping weaknesses of the constituent methods. (See also Brewer and Hunter 1989; Tashakkori and Teddlie 1998.)

The quality of the specific methods

The specific methods that are combined in mixed methods research are — obviously— either qualitative or quantitative, and therefore the quality principles described earlier in this chapter apply to them. Several mixed methods designs display a dominant method (see Chapter 7), and in such cases most of the evidence included in the validity argument will need to be in accordance with the quality standards of the particular paradigm.

3.2 Research ethics

Any qualitative researcher who is not asleep ponders moral and ethical questions. (Miles and Huberman 1994: 288)

Social research—including research in education—concerns people’s lives in the social world and therefore it inevitably involves ethical issues.

Hesse-Biber and Leavy (2006) observe that ethical discussions often remain detached or marginalized from discussions of the ‘real’ research project, almost as an afterthought. While this might be due to the fact that research publications are usually written for researcher audiences and it is assumed that colleagues are aware of the issues involved, in a research methods text we need to address the various ethical dilemmas that we face when we engage in real-world research.

3.2.1 Some key ethical dilemmas and issues

The first ethical dilemma to address is how seriously we should take the various ethical issues in educational contexts. Most research methodology books reiterate broad recommendations of ethical conduct that have been prepared for the social sciences (including psychology) or for medical research, and in these broad contexts there is indeed a danger that misguided or exploitive research can cause real harm. On the other hand, some researchers such as Johnson and Christensen (2004) emphasize that educational research should not be treated similarly to other social disciplines: fortunately, studies conducted by educational researchers seldom if ever run the risk of inflicting such severe mental and physical harm on participants. In fact, educational research has historically engaged in research that imposes either minimal or no risk to the participants and has enjoyed a special status with respect to formal ethical oversight. Yet, as Johnson and Christensen (2004) point out, certain research practices, especially qualitative ones, include elements that ‘muddy the ethical waters’ (p. 111) and warrant careful consideration. Some examples of such sensitive aspects of research are:

  • The amount of shared information

As we will see below in Section 3.2.6, one of the basic dilemmas of researchers is to decide how much information should be shared with the participants about the research so as not to cause any response bias or even non-participation. In Cohen et al.’s (2000: 63) words: ‘What is the proper balance between the interests of science and the thoughtful, humane treatment of people who, innocently, provide the data?’

  • Relationships

Qualitative studies can often result in an intimate relationship between researchers and participants, with the former trying to establish rapport and empathy to gain access to the participants’ lives and stories. This relational character of qualitative research raises general ethical questions about the limits of closeness and intimacy with the respondents—Ryen (2004) for example discusses the controversial issue of flirting with (adult) participants—and there is a concrete dilemma about how to end a research project without leaving the participants feeling that they were merely used.
  • Data collection methods

Certain methods that remove the participants from their normal activities and involve, for example, one-to-one contact, can fall under the confines of child protection legislation in several countries.

  • Anonymity

A basic dilemma in educational research concerns the fact that although ideally our participants should remain anonymous, we often need to identify the respondents to be able to match their performances on various instruments or tasks.

  • Handling the collected data

The video camera in particular is a very public eye and allows the identification of the participants long after the research has ended. Audio recordings can similarly be a threat to anonymity.

  • Ownership of the data

Who ‘owns’ the collected data? Does the researcher, asks Richards (2003), have complete control in editing or releasing information (observing, of course, privacy constraints)?

  • Sensitive information

In-depth interviews can elicit sensitive information that is not related to the goal of the study (for example, pointing to abuse or criminal activity). How do we react?

  • Testing

Although this book does not cover testing, we need to note that the misuse of test scores carries real dangers.

3.2.2 Legal context

In many countries, observing ethical principles is enforced by legal and institutional requirements. In the US, for example, researchers have to submit a detailed research plan for approval to an Institutional Review Board prior to starting their investigations in order to comply with federal regulations that provide protection against human rights violations.

3.2.3 Researcher integrity

Curiously, few research methods texts elaborate on the area of research ethics that is, in my view, central to any investigation: the researcher’s integrity.

A notable exception is the Ethical Standards of the American Educational Research Association (AERA 2002), which start out with a set of ‘guiding standards’ describing the researchers’ general responsibilities to the field. These include the following points:

  • Educational researchers must not fabricate, falsify, or misrepresent authorship, evidence, data, findings, or conclusions.
  • Educational researchers must not knowingly or negligently use their professional roles for fraudulent purposes.
  • Educational researchers should attempt to report their findings to all relevant stakeholders, and should refrain from keeping secret or selectively communicating their findings.

There are many grey areas in research where not even the most detailed regulations can prevent some undesirable shortcuts or manipulation. Although the general tenor of this book is meant to be highly practical and pragmatic, this is an area where we must firmly draw the line.

3.2.4 Protection from harm and achieving an equitable cost-benefit balance

The primary principle of research ethics is that no mental or physical harm should come to the respondents as a result of their participation in the investigation. This principle overrides all other considerations.

We should never forget that by spending time and energy helping us they are doing us a favor, and it is our responsibility to try to make the cost-benefit balance as equitable as possible. Unfortunately, it is all too common to see a ‘slash and burn’ research strategy whereby investigators use their participants without offering anything in return and disappear as soon as the data has been gathered. In some cases even saying a warm and salient ‘thank you’ can help to redress the balance.

3.2.5 Privacy, confidentiality, anonymity, and data storage

It is a basic ethical principle that the respondents’ right to privacy should always be respected and that respondents are within their rights to refuse to answer questions or to withdraw from the study completely without offering any explanation. It is also the participants’ right to remain anonymous, and if a participant’s identity is known to the research group, it is the researcher’s moral and professional (and in some contexts legal) obligation to maintain the level of confidentiality that was promised at the outset.

These straightforward principles have a number of implications:

  • We must make sure that we do not promise a higher degree of confidentiality than what we can achieve, and that the guarantees of confidentiality are carried out fully.
  • The right to confidentiality should always be respected when no clear understanding to the contrary has been reached.
  • We must make sure—especially with recorded/transcribed data—that respondents are not traceable or identifiable.

3.2.6 Informed consent and the issue of deception

The most salient and most often discussed aspect of research ethics is the issue of informed consent. (For relevant guidelines developed for applied linguistics, see the ‘Information for Contributors’ section of the journal TESOL Quarterly.) If there are regulations governing ethical practice in a country or institution, obtaining written consent from the participants is likely to be the first item on the agenda.

In the US, for example, federal regulations not only require written consent from the participants but also require informed consent before a researcher can use an individual’s existing records for research purposes (Johnson and Christensen 2004).

There is quite a bit of controversy about

(a) how ‘informed’ the consent should be—that is, how much information do we need to share with the respondents before asking them to participate; and

(b) in what form the consent should be given.

These are important questions to consider prior to the research because decisions about the styles and means of obtaining informed consent have an impact on the type of people who are likely to agree to participate (Wiles et al. 2005).

How ‘informed’ should the consent be? Of course, when we ask this question what we really mean is ‘how little information is enough to share in order to remain ethical?’ The reluctance to reveal too much about the nature of our research is a pragmatic one because certain information can influence or bias the participants’ responses and may even make them withdraw from the study. Even so, participants should normally be told the following:

  • As much as possible about the aims of the investigation and the purpose for which the data will be used.
  • The tasks the participants will be expected to perform during the study.
  • The possible risks and the potential consequences of participating in the research.
  • The extent to which answers will be held confidential.
  • The basic right of the participants to withdraw from the study at any point.


In some studies, however, full disclosure would compromise the investigation; an ethnographic study of a criminal subculture would be a good example, but even in far less extreme cases certain types of information may influence the results. Accordingly, although the AERA (2002) guidelines emphasize the significance of honesty, they do not forbid any engagement in deception if it is absolutely necessary, provided it is minimized and, after the study, the participants are briefed about the reasons for the deception. In addition, it goes without saying that the deception must not adversely affect the welfare of the participants. While most researchers would accept these general principles, there is considerable debate in the social science literature about the acceptable level of actual deceit.

Forms of consent and the consent forms

There are two basic forms of consent, active and passive. ‘Active’ consent involves consenting to participate in a research study by signing a consent form, whereas ‘passive’ consent involves not opting out or not objecting to the study. The most common example of the latter is to send a consent form to the parents of school children and ask them to return it only if they do not want their child to participate in the research.

It is clear that obtaining active consent is more explicit in ensuring that the participants know their rights and it also protects the researcher from any later accusations, but in certain types of educational research it may not be necessary or beneficial: a request for consent in such a formalized manner can be off-putting or can raise undue suspicion that something is not quite right about the study, thereby discouraging people from participating. In addition, signed consent is meaningless for some cultural or age groups. Therefore, while I would ask for active consent in a qualitative interview study, I would consider omitting it and obtaining only passive consent—if this does not contradict any legal regulations in the particular context—in an anonymous questionnaire survey.

A written consent form usually contains the following details (Cohen et al. 2000; Creswell 2003; Johnson and Christensen 2004):

  • A fair explanation of the purpose of the research and the procedures to be followed.
  • A description of any risks or discomforts as well as benefits the participant may encounter/receive.
  • A statement of the extent to which the results will be kept confidential.
  • A statement indicating that participation is voluntary and the participant can withdraw and refuse to participate at any time with no penalty.
  • An offer to answer any questions concerning the procedures and (option-ally) to receive a copy of the results.
  • Signatures of both the participant and the researcher, agreeing to these provisions.

Additional consent from teachers and parents

Many, if not most, educational studies are conducted within schools or other educational institutes and therefore the researcher may need to obtain additional consent from various authority figures such as teachers, head teachers, or principals. In some countries consent must be sought even from the local education authorities. Additional consent must also be obtained when the research is targeting minors who cannot be regarded as fully mature or being on equal terms with the researcher (Cohen et al. 2000). They may not be in a position to represent themselves appropriately, which means that someone else needs to be consulted.

The question here is to decide who has sufficient authority in such cases: the legal guardian (for example, parent), the children’s teacher(s) or both. In this respect the existing legal and ethical research frameworks differ greatly across countries. It is my view that unless there exist legal requirements stating otherwise and if the research is neither aimed at sensitive information nor involves extensive participant engagement (for example, a relatively neutral anonymous questionnaire), permission to conduct the study can be granted by the children’s teachers. Teachers are usually aware of the significance of legal matters and therefore if they have any doubts about who should authorize the project, they will seek advice. And even if parental permission for the research is needed, I would favor passive consent whereby parents are advised about the proposed research and parental permission will be assumed unless the parents object before the proposed starting date.

3.2.7 Concluding remarks on research ethics

My personal conclusion about research ethics is that it should be taken more seriously than many researchers do and at the same time it should be taken less seriously than some legislators do. Possibly because applied linguistic research does not usually pose any obvious threat to the participants, researchers in our field often ignore the significance of ethical issues and research ethics are only mentioned within the context of legal requirements to comply with. The problem with this practice is that it does not provide a firm ethical framework that we can rely on when an ethically difficult situation arises.

3.3 Research questions and hypotheses

‘… many scientists owe their renown less to their ability to solve problems than to their capacity to select insightful questions for investigation …’ (Shavelson and Towne 2002: 55)

Most research texts suggest that the proper way to do research involves first generating one or more research questions and then choosing the design, the method, and the instruments that allow the researcher to find answers to these questions.

For example, in discussing the principles of scientific research in education, Shavelson and Towne (2002: 99) explicitly state that ‘we have to be true to our admonition that the research question drives the design, not vice versa’. While this is a logical position, many novice researchers will attest to the fact that in reality the research process is not so straightforward and the ‘research-question-first’ principle leaves several questions open.

For example, in an investigation that is primarily exploratory, how can we start out by producing a specific research question when we know relatively little of the topic, which is exactly why we want to research into it? How is the research question related to other frequently mentioned terms such as ‘research topic’ or ‘research hypothesis’? What about qualitative research in which the exact theme and angle of the investigation often emerge only during the project?

3.3.1 Paradigmatic differences in formulating the research questions

One characteristic feature of qualitative studies is their emergent nature and therefore QUAL research purposes and questions are often inevitably vaguer than their QUAN counterparts. Instead of describing a specific issue or problem, the research purpose often contains only the specification of a situated phenomenon or a central idea that will be explored with the aim of developing new insights and possibly forming a theory in the end. In accordance with this, qualitative research questions tend to be broader than quantitative ones, often focusing on the big picture or the main processes that are thought to shape the target phenomenon— usually it is not possible to be more specific at this stage without limiting the inquiry and, therefore, investigators emphasize the exploratory nature of the study instead.

3.4 Other essentials for launching a study: pilot study, research log, and data management

To conclude this chapter, let me address three essential issues to be considered before launching an investigation:

(a) Piloting the instruments and the various research procedures;

(b) Keeping a precise research log; and

(c) Managing and storing data professionally.

3.4.1 Piloting the research

The gist of this section can be summarized in a short sentence: always pilot your research instruments and procedures before launching your project. Just like theatre performances, a research study needs a dress rehearsal to ensure the high quality (in terms of reliability and validity) of the outcomes in the specific context. And while this tenet is universally endorsed by research methodologists, my personal experience is that all too often piloting either does not happen or is superficial. There are at least three reasons for this omission: first, researchers may simply not realize the significance of the piloting phase. Second, investigators at the preparation stage are usually eager to get down to data collection and see some results. Third, the piloting phase may not have been scheduled in the timeframe of the project because the researchers did not realize that, when properly done, this phase can take several weeks to complete. Piloting is more important in quantitative studies than in qualitative ones, because quantitative studies rely on the psychometric properties of the research instruments. In a questionnaire survey, for example, if we have not covered a variable by sufficient questionnaire items then this variable simply cannot emerge in the results, no matter how important it may be in the particular context.

3.4.3 Data management

The significance of good strategies to manage and store the data is twofold, concerning both logistic and ethical issues. First, it is a challenging task to devise a storing and labelling system which will stop us from losing or mixing up data and which will provide an efficient way of locating and accessing information. Researchers need to make a detailed list of all the data sources in a research log (see previous section), organized according to key parameters (for example, type of data, place of data-gathering) and then need to keep these records up-to-date. Audiotapes, video cassettes, and any illustrative documents need to be properly labelled and stored in a secure place, and we also need a transparent system for keeping relevant computer files. Particularly in qualitative research we can quickly gather masses of rather ‘unorderly’ data ranging from field notes to documents and of course transcripts.
