Cross-cultural Methodologies for Organizational Research
Using Self-Report Measures: A Best Practices Approach
Brian S. Schaffer
Christine M. Riordan
University of Georgia
Department of Management
Terry College of Business
Athens, GA 30602-6256
There are a number of methodological issues that can be problematic in
cross-cultural studies that use self-report survey instruments. This paper reviews the
organizational research literature to identify the common practices being used in relation
to these issues. A framework is established for this analysis that involves three stages
related to the research process. These stages are 1) the development of the research
question, 2) the alignment of the research contexts, and 3) the validation of the research
instruments. A sample of cross-cultural studies was examined in the context of these three
stages, and served as a basis for the identification of some best practices
that are meant to deal with cross-cultural complexities.
Cross-cultural perspectives are becoming more prevalent in today's study of organizations. As
business continues to take a global outlook, theoretical constructs commonly used in
domestic research will need to be applied to new cross-cultural arenas. Recently,
researchers have begun to take notice of some important methodological issues associated
with the use of survey instruments in cross-cultural research (e.g., Cheung &
Rensvold, 1999; Riordan & Vandenberg, 1994). These issues can have a strong impact on
a study's results, and on the subsequent interpretation of those results. If
researchers ignore the difficulties inherent in using self-report questionnaires in
cross-cultural studies, the field as a whole may be subject to misinterpreting some
findings that may actually be meaningless, inconclusive, or misguiding.
The purposes of this
article are threefold. First, we provide a review of these important methodological issues
in cross-cultural research. It should be noted that many of these issues have previously
been identified by other researchers as being threats to validity in a variety of
field-research settings (e.g., Cook & Campbell, 1979; Campbell & Stanley, 1963).
Our goal here is to relate these threats specifically to cross-cultural settings. Second,
we review the most common practices currently being used in the administration of
self-report instruments in cross-cultural research. From a sample of cross-cultural
studies (see Appendix for articles included in this sample), we identified practices that
were found to be most typical, and we present some problems associated with these
practices. And third, we identify some best practices for administering these
We developed a
framework, consisting of three stages, for comparing and evaluating the various
methodologies. These stages are 1) the development of the research question, 2) the
alignment of the research contexts, and 3) the validation of the research instruments. Our
comparisons and evaluations of different cross-cultural studies, in the context of these
three stages, served as a basis for the identification of these best practices.
Stage 1: Development of the Research Question
In this stage,
researchers must decide whether their study will be approached from an etic or an emic
perspective, and they must establish the way in which they will define or consider culture in the context of their research. The emic
approach, as it applies to cross-cultural research, focuses on studying a construct from
within a specific culture, and understanding that construct as the people from within that
culture understand it (Gudykunst, 1997). The etic approach, on the other hand, involves
developing an understanding of a construct by comparing it across cultures using
predetermined characteristics. Researchers have recognized the importance of both of these
approaches. From a measurement
standpoint, criteria in an emic approach are relative to the characteristics of the
particular culture being studied, and so differences or variability in one setting may not
have the same significance as they would in another setting. The etic approach is more
suited for broader analyses, usually involving two or more cultures. The main assumption
in etic research is that there is a shared frame of reference across culturally diverse
samples, and that construct measurement can be applied to all of the samples in the same
way, ultimately allowing for more generalizability (Ronen & Shenkar, 1988). Since
cross-cultural organizational research often involves comparative studies between two or
more cultures, much of the research is conducted with an etic perspective. From a
measurement standpoint, criteria in an etic approach are considered absolute or universal,
with less attention being given to the internal characteristics of a particular culture
(Berry, 1989). The use of an etic approach may be the most practical for organizational
researchers, in terms of financial limitations and time pressures. However, if etic
constructs are used to make cross-cultural comparisons, researchers risk not capturing all
of the culture-specific (emic) aspects of the construct relative to a particular culture
in the study. On the other hand, if an emic strategy is used, a more precise and thorough
description of the construct within one culture is obtained, but the ability to make
cross-cultural comparisons becomes more difficult (Church & Katigbak, 1988).
When researchers fail
to consider the emic aspects of the different cultures involved in their studies, and when
they assume that the concepts being tested exist across all cultures, they are applying
imposed etics, or pseudo etics (Berry, 1990). This problem has been recognized as being
fairly common in cross-cultural research (see Ongel & Smith, 1994), and in our review
we found a similar pattern. A best-practice suggestion for dealing with this problem is to
use a combined emic-etic approach, or a derived etic approach. Rather than identifying
emic dimensions from one culture and simply applying those dimensions to the other
culture(s) in the study, a derived etic approach requires researchers to first attain emic
knowledge (usually through observation, and/or participation) about all of the cultures in
the study (Berry, 1990; Cheung, Conger, Hau, Lew, & Lau, 1992). This allows them to
put aside their culture biases, and to become familiar with the relevant cultural
differences in each setting. When this is done, it may then be possible to make
cross-cultural links between the emic aspects of each culture. While some emic dimensions
will emerge in all cultures, some dimensions may emerge in only one of the cultures
(Cheung et al., 1992). Only where there are observed commonalities can cross-cultural
comparisons be appropriately made. The comparisons here are considered derived etics since
they are derived by first doing emic research in each of the cultures, and not just one of them.
In the first stage of
developing the research question, researchers must also determine how the term culture
will be operationalized in their study. In many of the studies we examined,
country or nation was used as a proxy for culture (e.g., Kim,
Park, & Suzuki, 1990). While country may in fact be a suitable and convenient
indicator of culture, using it as the sole operationalization of culture has limitations.
For example, in a cross-cultural study, there may be certain within-country differences
that are actually greater than between-country differences along certain dimensions
(Samiee & Jeong, 1994). While some countries like Japan have a relatively homogeneous
culture, other countries like Canada and Switzerland may have more distinct sub-cultures
within their borders (Peterson & Smith, 1997). Recognizing that there may be other
delimiters of culture besides country, Peterson and Smith (1997) provided a comprehensive
list of cultural determinants meant to assist researchers with this issue. These
determinants include language, religion, technology, industry, national boundaries, and
climate. Adding these characteristics to country as possible delimiters of
culture can enhance the integrity of cross-cultural research. A best practice in this
area, then, would involve researchers establishing which characteristics are relevant to
their specific research context, and then using them to more accurately assess cultural differences.
Stage 2: Alignment of Research Contexts
The alignment of research contexts refers to actions that researchers can take to assure
congruence between the different cultures being studied. In our review, two main issues
seemed to be particularly relevant to this stage. First, researchers should establish
equivalency among the samples in their cross-cultural studies, and second, they should
maintain consistency in the survey administration procedures used across the different
research settings. Because researchers sometimes have limited access to organizations, it is often difficult for them
to establish the equivalency of samples. Many times, the choice of samples is arbitrary
and opportunistic (Yu, Keown, & Jacobs, 1993), with convenience being the deciding
factor. Samples can differ in terms of demographic characteristics, in terms of
environmental characteristics, and in terms of respondents' levels of experience
related to both their work history and to their exposure to certain measurement
instruments. All of these differences become a concern when they are not relevant to the
research topic being studied. Demographic differences
in gender, age, education, and marital status can all be sources of unwanted variation. In
trying to match organizational structures for research design purposes, researchers are
often forced to sacrifice demographic proportions across two or more cultures (Chikudate,
1997). Environmental characteristics become a concern in cross-cultural research when
differences exist in terms of social, economic, legal, education, and/or industry
structures (Janssens, Brett, & Smith, 1995). We found that cross-cultural studies
sometimes fail to control for these types of factors. Finally, differences in
respondents' experience levels can be
problematic. Because cross-cultural studies often involve highly dissimilar groups (Van de
Vijver & Leung, 1997), respondents from different samples are likely to have
different levels of past work experience, in terms of both tenure and breadth of exposure
to different functional areas (e.g., Schneider & De Meyer, 1991). In addition,
differences may exist in terms of respondents' experience levels with measurement
instruments and with general testing procedures, both of which can be undesirable
performance-related sources of variation (Van de Vijver & Leung, 1997). To
assure sample equivalency, researchers should try to minimize the effects of these sample
differences in their cross-cultural studies. If samples cannot be matched on the basis of
some predetermined variables, then the variables should be controlled for methodologically
(Sekaran & Martin, 1982; Hudson, Baraket, & LaForge, 1959). A second
important issue related to contextual alignment is whether the administration of surveys
is consistent across the different research settings. This involves technical equivalence,
and includes establishing equivalent data collection methods, instrument formats,
instrument scaling procedures, and survey-timing across the samples (Ortega & Richey,
1998; Sekaran & Martin, 1982; Yu et al., 1993). For example, if items on a written
instrument were read to a sample of respondents in one culture (because of literacy
levels), and administered in the standard way in another culture, the measurement
reliability and validity of the study could be compromised, thus making an interpretation
of the results difficult (Ortega & Richey, 1998). Similarly, when surveys are
administered to each sample at different time periods, the sample of respondents receiving
the later survey might be subject to a higher
attrition rate (Kok et al., 1995), and the results of the study could be distorted because
of a category of respondents that were present in the first sample, but less
representative in the second sample. For
the purposes of establishing contextual alignment in cross-cultural research, we
identified the following best practices related to sample equivalence and technical
equivalence. First, as mentioned before, efforts should be made to match samples on the basis of demographics,
environmental factors, and levels of experience. However, researchers may not always be
able to use a matching strategy, since resources and subjects have varying degrees of
availability, and since different cultural groups may sometimes have contrasting profiles
in terms of these characteristics (Van de Vijver & Leung, 1997). Therefore,
researchers should statistically control for any differences that remain between the
samples (e.g., Peterson et al., 1995). Second,
researchers can use insiders' and outsiders' perspectives together to help
identify some problematic contextual issues. For example, researchers might work together
with top executives from a different culture to clear up ambiguities about items on an
employee survey (see Johns & Xie, 1998). This type of interaction between outsiders
and insiders can be particularly helpful in alleviating some of the problems that could
lead to respondents' feelings of uneasiness with the interventions. Third, explicit
instructions and examples should be included in all cross-cultural survey instruments, and
these should be provided to each of the samples in a consistent manner. For example, if an
instrument is translated into another language for a sample, then the instructions should
also be translated. Finally, instruments can be used in pilot studies, when possible, to
help identify contextual problems. These studies can help researchers identify unforeseen
issues related to survey administration, such as translation problems and specific
ambiguities associated with item phraseology (see Smith, Peterson, & Wang, 1996;
Kanter & Corn, 1994; Gowan & Ochoa, 1998).
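The recommendation to statistically control for remaining sample differences can be illustrated with a small simulation. The data, variable names, and effect sizes below are invented purely for illustration; the point is only that adding a covariate (here, tenure) to a regression removes variation that a naive group comparison would misattribute to culture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (illustrative only): attitude scores from two cultural
# samples whose respondents differ systematically in tenure. Tenure, not
# culture, drives the score here, so controlling for tenure should shrink
# the apparent "culture effect" toward zero.
n = 200
culture = np.repeat([0, 1], n)                   # 0 = sample A, 1 = sample B
tenure = np.concatenate([rng.normal(6, 3, n),    # sample A: shorter tenure
                         rng.normal(12, 3, n)])  # sample B: longer tenure
score = 0.5 * tenure + rng.normal(0, 1, 2 * n)   # no true culture effect

# Naive comparison: regress score on a culture dummy alone
X_naive = np.column_stack([np.ones(2 * n), culture])
b_naive = np.linalg.lstsq(X_naive, score, rcond=None)[0]

# Controlled comparison: add tenure as a covariate
X_ctrl = np.column_stack([np.ones(2 * n), culture, tenure])
b_ctrl = np.linalg.lstsq(X_ctrl, score, rcond=None)[0]

# b_naive[1] shows a sizable spurious "culture difference" (about 3
# points), while b_ctrl[1] falls near zero once tenure is held constant.
```

In practice researchers would use a full ANCOVA or regression package with significance tests, but the logic of partialling out a nuisance covariate is the same.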
Stage 3: Validation of the Research Instruments
The final stage in
our framework involves the validation of the research instrument. Researchers must ensure
that the measures of a construct developed in one culture can be applied to another
culture before they can establish a basis for theoretical comparisons. For this objective,
establishing both semantic equivalence and measurement equivalence are essential. In
establishing semantic equivalence, the goal for the researcher is to ensure that the
multiple versions of a self-report instrument used cross-culturally fully account for
linguistic differences among the samples. The main concern should be for the meaning of
each item after translation to be consistent across the different respondents from each
culture. This is rarely an easy task. Even in situations where researchers and linguists
work together to produce a common version of an instrument for a cross-cultural study,
there is still the possibility that remaining differences in meaning will have an
influence on some of the study's findings (Holtzman, 1968). For measurement equivalence to be established, constructs and their
meanings should apply equally across the different cultures being studied (conceptual
equivalence), and respondents across different cultures should be consistent in their
interpretations or calibrations of the scoring formats (scaling equivalence) (Riordan
& Vandenberg, 1994).
We identified the
following best practices for establishing semantic equivalence. First, researchers should
employ back-translation when they intend to administer an instrument to respondents who
speak a foreign language. In this process, bilingual experts translate the instrument from
language A to language B, and then back again to language A (Ortega & Richey, 1998).
The purpose of this double translation is to allow experts to examine each survey item on
both versions to establish meaning conformity. If inconsistencies are found, then items
can be reworded or, if necessary, eliminated. Second, researchers should avoid using
certain figures of speech, terminologies, or phrases in their survey instruments that may
be common in the home-base culture, but unfamiliar to other cultures. This may be
particularly important when the second culture is English-speaking, and is responding to
an English version of the survey. For example, respondents from non-U.S. cultures may
interpret the phrase, "I put everything I have into my work," in a number of
different ways. Does the phrase refer to how much effort you put forth while doing your
job, or does it mean taking all of your possessions and applying them to the work you do?
Third, cross-cultural researchers need to explicitly describe the procedures they used to
establish semantic equivalence. Most of the studies in our review included statements
about measurement equivalence, while only some mentioned semantic equivalence. In order
for cross-cultural studies to be properly evaluated and replicated, these kinds of
statements become necessities.
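The back-translation comparison described above can be given a crude computational first pass. This is a minimal sketch: the items, the 0.8 threshold, and the use of surface string similarity are all illustrative assumptions, and flagged items would still require bilingual experts to judge meaning conformity.

```python
from difflib import SequenceMatcher

def flag_divergent_items(original_items, back_translated_items,
                         threshold=0.8):
    """Crude screen to prioritize expert review: pair each original
    survey item with its back-translated counterpart and flag pairs
    whose surface similarity falls below a threshold. String similarity
    is only a first-pass filter, not a test of semantic equivalence."""
    flagged = []
    for i, (orig, back) in enumerate(zip(original_items,
                                         back_translated_items)):
        ratio = SequenceMatcher(None, orig.lower(), back.lower()).ratio()
        if ratio < threshold:
            flagged.append((i, round(ratio, 2)))
    return flagged

# Hypothetical items: the second back-translation has drifted in meaning
originals = ["I am satisfied with my job.",
             "I put a great deal of effort into my work."]
back = ["I am satisfied with my job.",
        "I give all my possessions to my employer."]
flagged = flag_divergent_items(originals, back)  # flags item 1 for review
```

Flagged items would then be reworded or, if necessary, eliminated, exactly as in the back-translation procedure described above.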
To establish measurement equivalence, we refer to two best-practice statistical approaches that have
been previously established by researchers. These are 1) covariance structure analysis
(e.g., Yang et al., 2000; Cheung & Rensvold, 1999; Ryan et al., 1999; Riordan &
Vandenberg, 1994), and 2) item response theory (e.g., Butcher & Han, 1996; Hambleton
& Kanjee, 1995; Ellis, Becker, & Kimmel, 1993; Hulin & Mayer, 1985).
Typically, researchers have used a multiple-groups covariance structure analysis (when
comparing two or more samples) to examine measurement equivalence, because such an
analysis allows for direct testing of equivalency assumptions through a series of nested
constraints placed upon selected parameters across the samples (Riordan & Vandenberg,
1994; Ryan et al., 1999). Measurement equivalence, including both conceptual and scaling
equivalence, can be examined in a series of increasingly restrictive hypothesis tests.
Cross-cultural researchers have normally determined measurement equivalence by observing
the same number of constructs and items loading on a factor, along with an invariance of
factor loadings (Ryan et al., 1999). Importantly, these approaches to examining
measurement equivalence allow the researchers to specify constraints a priori, with some
theoretical justification for proceeding with the analyses (Vandenberg & Lance, 2000).
Riordan and Vandenberg's (1994) examination of
three work-related measurement instruments across samples of Korean and American
employees is an example of this covariance structure analytic approach.
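The nested-constraints logic described above reduces, at each step, to a chi-square difference (likelihood-ratio) test between a freely estimated model and a more constrained one. A minimal sketch, assuming the two models have already been fit in a structural equation modeling package; the fit statistics below are hypothetical, not taken from the cited studies:

```python
from scipy.stats import chi2

def chi_square_difference(chisq_restricted, df_restricted,
                          chisq_free, df_free):
    """Likelihood-ratio (chi-square difference) test for two nested
    covariance structure models fit to the same multi-group data.
    The restricted model adds equality constraints (e.g., equal factor
    loadings across cultural samples), so it has more degrees of
    freedom and a chi-square at least as large as the free model."""
    delta_chisq = chisq_restricted - chisq_free
    delta_df = df_restricted - df_free
    p_value = chi2.sf(delta_chisq, delta_df)
    return delta_chisq, delta_df, p_value

# Hypothetical fit statistics: a configural (unconstrained) model versus
# a metric model constraining factor loadings equal across two samples.
d_chisq, d_df, p = chi_square_difference(
    chisq_restricted=112.4, df_restricted=58,  # loadings constrained
    chisq_free=98.7, df_free=52)               # loadings free

# A non-significant p means the equality constraints do not significantly
# worsen fit, supporting the invariance (equivalence) hypothesis tested.
```

Each of the increasingly restrictive hypothesis tests mentioned above (equal number of factors, equal loadings, and so on) repeats this comparison with a tighter restricted model.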
Another approach for
dealing with measurement equivalence, and for identifying items that do not function
similarly across different cultures, is to use statistical methods based on item response
theory, or IRT (Ellis et al., 1993). IRT is a theory-grounded process that models the
distribution of respondents' success at the item level (Fan, 1998). This process
produces item statistics independent of respondent statistics, and person statistics
independent of the survey items administered. This invariance property of the theory has
made it possible to solve important measurement problems that have been difficult to
address with other frameworks, and it has established the basis for theoretically
justifying the use of IRT models (Fan, 1998). The models generated from this process
describe the relationship between a respondent's observable response to an item and
the respondent's standing on the unobservable trait measured by the survey instrument
(Ellis et al., 1993). An item characteristic curve (ICC) can then be used to display this
relationship, showing the response probability as a function of the trait measured by the
instrument. When ICCs estimated separately for the same item for two samples are the same,
the item is said to function equivalently for both groups, and when the ICCs differ by
more than sampling error, then there exists what is called differential item functioning,
or DIF (Ellis et al., 1993; Hambleton & Swaminathan, 1985; Hulin, Drasgow &
Parsons, 1983; Lord, 1980; Thissen, Steinberg & Wainer, 1988, 1989). DIF is an
indication of a lack of measurement equivalence for a particular item in a survey. DIF
items should therefore not be used to compare samples in cross-cultural research, because
such comparisons would be based on response tendencies rather than on true differences in
the construct of interest.
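The ICC comparison described above can be sketched numerically. This is a simplified illustration assuming a two-parameter logistic model with hypothetical item parameters; the unsigned area between the two groups' curves, in the spirit of area-based DIF indices, serves as a rough flag:

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve: the probability
    of endorsing an item as a function of the latent trait theta, given
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def dif_area(a1, b1, a2, b2, lo=-4.0, hi=4.0, steps=2001):
    """Unsigned area between two groups' ICCs for the same item over a
    trait range, approximated by the trapezoid rule. A near-zero area
    suggests the item functions equivalently; a large area flags
    possible differential item functioning (DIF)."""
    grid = np.linspace(lo, hi, steps)
    gap = np.abs(icc_2pl(grid, a1, b1) - icc_2pl(grid, a2, b2))
    return float(np.sum((gap[1:] + gap[:-1]) / 2 * np.diff(grid)))

# Hypothetical parameters estimated separately in two cultural samples
area_same = dif_area(1.2, 0.3, 1.2, 0.3)  # identical ICCs: no DIF
area_dif = dif_area(1.2, 0.3, 1.2, 1.0)   # item is "harder" in group 2

# area_same is essentially 0, while area_dif approaches |b2 - b1| = 0.7,
# flagging the item for possible DIF and exclusion from comparisons.
```

A formal DIF analysis would also test whether the observed area exceeds what sampling error alone could produce, as the studies cited above do.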
An increasing international focus in organizational research has required many researchers to apply
commonly used survey instruments to new cross-cultural settings. This paper has reviewed
some of the complexities involved in administering these instruments to culturally diverse
samples. Specifically, we have presented three stages of the research process, and for
each stage we have identified some best practices which are meant to deal with these
complexities. These best practices will hopefully be employed by researchers as a
checklist for verifying the validity and methodological soundness of their cross-cultural
studies.
Appendix: Articles Sampled for Best Practices
Aycan, Z., Kanungo, R.N. & Sinha, J.B. (1999). Organizational culture and human resource
management practices: The model of culture fit. Journal of Cross-Cultural Psychology,
S., Inkpen, A.C. & Phatak, A. (1998). Are Japanese managers more long-term oriented
than United States managers? Management International Review, 38: 239-256.
Buda, R. & Elsayed-Elkhouly, S.M. (1998). Cultural differences between Arabs and Americans:
Individualism-collectivism revisited. Journal of Cross-Cultural Psychology, 29(3):
Cheung, P.C., Conger, A.J., Hau, K., Lew, W.J.F., & Lau, S. (1992). Development of the
multi-trait personality inventory (MTPI): Comparison among four Chinese populations. Journal
of Personality Assessment, 59(3): 528-551.
Chikudate, N. (1997). Exploring the life-world of organizations by linguistic oriented phenomenology
in sub-cultural analysis of organizations: A comparison between Japanese and U.S. Banks. Management
International Review, 37(2): 169-183.
Chiu, R.K. & Kosinski, F.A. (1999). The role of affective dispositions in job satisfaction
and work strain: Comparing collectivist and individualist societies. International
Journal of Psychology, 34(1): 19-28.
Church, A.T. & Katigbak, M.S. (1988). The emic strategy in the identification and assessment of
personality dimensions in a non-western culture. Journal of Cross-Cultural Psychology,
Earley, P.C. (1994). Self or group? Cultural effects of training on self-efficacy and performance.
Administrative Science Quarterly, 39: 89-117.
Farh, J., Dobbins, G.H., & Cheng, B. (1991). Cultural relativity in action: A comparison of
self-ratings made by Chinese and U.S. workers. Personnel Psychology, 44, 129-147.
Farh, J., Earley, P.C. & Lin, S. (1997). Impetus for action: A cultural analysis of justice
and organizational citizenship behavior in Chinese society. Administrative Science
Quarterly, 42(3): 421-444.
Gedajlovic, E.R. & Shapiro, D.M. (1998). Management and ownership effects: Evidence from five
countries. Strategic Management Journal, 19, 533-553.
Gowan, M.A. & Ochoa, C.L. (1998). Parent-country national selection for the maquiladora
industry in Mexico: Results of a pilot study. Journal of Management Issues, 10,
Gudykunst, W.B., Gao, G., Nishida, T., Nadamitsu, Y. & Sakai, J. (1992). Self-monitoring in Japan
and the United States. In S. Iwawaki, Y. Kashima & K. Leung (Eds.), Innovations in
Cross-Cultural Psychology: Selected Papers from the Tenth International Conference of the
International Association for Cross-Cultural Psychology. Berwyn, PA: Swets & Zeitlinger.
Jaccard, J. & Wan, C.K. (1986). Cross-cultural methods for the study of behavioral decision
making. Journal of Cross-Cultural Psychology, 17(2): 123-149.
Janssens, M., Brett, J.M., & Smith, F.J. (1995). Confirmatory cross-cultural research: Testing
the viability of a corporation-wide safety policy. Academy of Management Journal,
Johns, G. & Xie, J.L. (1998). Perceptions of absence from work: People's Republic of
China versus Canada. Journal of Applied Psychology, 83, 515-530.
Kanter, R.M. & Corn, R.I. (1994). Do cultural differences make a business difference?
Contextual factors affecting cross-cultural relationship success. Journal of Management
Development, 13, 5-23.
Kim, K.I., Park, H. & Suzuki, N. (1990). Reward allocations in the United States, Japan,
and Korea: A comparison of individualistic and collectivistic cultures. Academy of
Management Journal, 33(1): 188-198.
Love, K.G., Bishop, R.C., Heinisch, D.A. & Montei, M.S. (1994). Selection across two
cultures: Adapting the selection of American assemblers to meet Japanese job performance
demands. Personnel Psychology, 47: 837-846.
Merritt, A.C. & Helmreich, R.L. (1996). Human factors on the flight deck: The influence of
national culture. Journal of Cross-Cultural Psychology, 27(1): 5-24.
Milliman, J.F., Nason, S., Lowe, K., Kim, N. & Huo, P. (1995). An empirical study of performance
appraisal practices in Japan, Korea, Taiwan, and the U.S. Academy of Management Journal,
Best Paper Proceedings: 182-186.
Morris, M.W., Williams, K.Y., Leung, K., Larrick, R., Mendoza, M.T., Bhatnagar, D., Li, J., Kondo,
M., Luo, J. & Hu, J. (1998). Conflict management style: Accounting for cross-national
differences. Journal of International Business Studies, 29(4): 729-748.
Mueller, C.W., Iverson, R.D. & Jo, D. (1999). Distributive justice evaluations in two cultural
contexts: A comparison of U.S. and South Korean teachers. Human Relations, 52(7):
Pennings, J. (1993). Executive reward systems: A cross-national comparison. Journal of Management
Studies, 30(2): 261-280.
Peterson, M.F., Smith, P.B., Bond, M.H., & Misumi, J. (1990). Personal reliance on alternative
event-management processes in four countries. Group and Organization Studies, 15,
Peterson, M.F., Smith, P.B., Akande, A., Ayestaran, S., Bochner, S., Callan, V., Cho, N.G., Iscte,
J., D'Amorim, M., Francois, P., Hofmann, K., Koopman, P.L., Leung, K., Lim, T.K.,
Mortazavi, S., Munene, J., Radford, M., Ropo, A., Setiadi, B., Sinah, T.N., Sorenson, R.,
& Viedge, C. (1995). Role conflict, ambiguity, and overload: A 21-nation study. Academy
of Management Journal, 38, 429-452.
Peterson, M.F. & Smith, P.B. (1997). Does national culture or ambient temperature explain
cross-national differences in role stress? No sweat! Academy of Management Journal,
Ralston, D.A., Gustafson, D.J., Elsass, P.M., Cheung, F., & Terpstra, R.H. (1992). Eastern
values: A comparison of managers in the United States, Hong Kong, and the People's
Republic of China. Journal of Applied Psychology, 77, 664-671.
Riordan, C.M. & Vandenberg, R.J. (1994). A central question in cross-cultural research: Do
employees of different cultures interpret work-related measures in an equivalent manner? Journal
of Management, 20, 643-671.
Ryan, A.M., Chan, D., Ployhart, R.E., & Slade, L.A. (1999). Employee attitude surveys in a
multinational organization: Considering language and culture in assessing measurement
equivalence. Personnel Psychology, 52: 37-58.
Samiee, S. & Jeong, I. (1994). Cross-cultural research in advertising: An assessment of
methodologies. Journal of the Academy of Marketing Science, 22(3): 205-217.
Schneider, S.C. & De Meyer, A. (1991). Interpreting and responding to strategic issues: The impact
of national culture. Strategic Management Journal, 12, 307-320.
Sekaran, U. & Martin, H.J. (1982). An examination of the psychometric properties of some
commonly researched individual differences, job, and organizational variables in two
cultures. Journal of International Business Studies, Spring/Summer, 51-65.
Smith, P.B., Trompenaars, F. & Dugan, S. (1995). The Rotter locus of control scale in 43
countries: A test of cultural relativity. International Journal of Psychology,
Smith, P.B., Dugan, S. & Trompenaars, F. (1996). National culture and the values of
organizational employees: A dimensional analysis across 43 nations. Journal of
Cross-Cultural Psychology, 27(2): 231-264.
Smith, P.B., Peterson, M.F., & Wang, Z.M. (1996). The manager as mediator of alternative
meanings: A pilot study from China, the USA and UK. Journal of International Business
Studies, 27, 115-137.
Steiner, D.D. & Gilliland, S.W. (1996). Fairness reactions to personnel techniques in France
and the United States. Journal of Applied Psychology, 81, 134-141.
Stening, B.W. & Everett, J.E. (1984). Response styles in a cross-cultural managerial study. The
Journal of Social Psychology, 122: 151-156.
Strohschneider, S. & Guss, D. (1999). The fate of the Moros: A cross-cultural exploration of
strategies in complex and dynamic decision making. International Journal of Psychology,
Teagarden, M.B. & Von Glinow, M.A. (1997). Human resource management in cross-cultural contexts:
Emic practices versus etic philosophies. Management International Review, 37 (1)
special issue: 7-20.
Van de Vliert, E. & Van Yperen, N.W. (1996). Why cross-national differences in role
overload? Don't overlook ambient temperature! Academy of Management Journal,
Van Dyne, L., & Ang, S. (1998). Organizational citizenship behavior of contingent workers
in Singapore. Academy of Management Journal, 41, 692-703.
Xie, J.L. (1996). Karasek's model in the People's Republic of China: Effects of job
demands, control, and individual differences. Academy of Management Journal, 39,
Yang, N., Chen, C.C., Choi, J. & Zou, Y. (2000). Sources of work-family conflict: A
Sino-U.S. comparison of the effects of work and family demands. Academy of Management
Journal, 43(1): 113-123.
Yu, J.H., Keown, C.F., & Jacobs, L.W. (1993). Attitude scale methodology: Cross-cultural
implications. Journal of International Consumer Marketing, 6(2): 47-64.
References
Berry, J.W. (1989). Imposed etics-emics-derived etics: The operationalization of a compelling
idea. International Journal of Psychology, 24(6): 721-735.
Berry, J.W. (1990). Imposed etics, emics, and derived etics: Their conceptual and operational
status in cross-cultural psychology. In Headland, T.N., Pike, K.L., & Harris, M.
(Eds.). Emics and Etics: The Insider/Outsider Debate. Newberry Park, CA: Sage.
Butcher, J.N. & Han, K. (1996). Measures of establishing cross-cultural equivalence. In Butcher, J.N.
(Ed.). International Adaptations of the MMPI-2: Research and Clinical Applications.
Minneapolis, MN: University of Minnesota Press.
Campbell, D.T. & Stanley, J.C. (1963). Experimental and quasi-experimental designs for research
on teaching. In N.L. Gage (Ed.), Handbook of Research on Teaching. Chicago: Rand McNally.
Cheung, P.C. & Rensvold, R.B. (1999). Testing factorial invariance across groups: A
reconceptualization and proposed new method. Journal of Management, 25, 1-27.
Cook, T.D. & Campbell, D.T. (1979). Quasi-Experimentation: Design and Analysis Issues for
Field Settings. Boston, MA: Houghton Mifflin.
Ellis, B.B., Becker, P., & Kimmel, H.D. (1993). An item response theory evaluation of an
English version of the Trier Personality Inventory (TPI). Journal of Cross-Cultural
Psychology, 24, 133-148.
Fan, X. (1998). Item response theory and classical test theory: An empirical comparison of
their item/person statistics. Educational and Psychological Measurement, 58,
Gudykunst, W.B. (1997). Cultural variability in communication. Communication Research, 24(4):
Hambleton, R.K. & Kanjee, A. (1995). Increasing the validity of cross-cultural assessments: Use of
improved methods for test adaptations. European Journal of Psychological Assessment,
Hambleton, R.K. & Swaminathan, H. (1985). Item Response Theory: Principles and Applications.
Boston, MA: Kluwer-Nijhoff.
Holtzman, W.H. (1968). Cross-cultural studies in psychology. International Journal of Psychology,
Hudson, B.B., Baraket, M.K. & LaForge, R. (1959). Problems and methods of cross-cultural
research. Journal of Social Issues, 15(3): 5-19.
Hulin, C.L., Drasgow, F., & Parsons, C.K. (1983). Item Response Theory: Applications to
Psychological Measurement. Homewood, IL: Dow Jones Irwin.
Hulin, C.L. & Mayer, L.J. (1985). Psychometric equivalence of a translation of the Job
Descriptive Index into Hebrew. Journal of Applied Psychology, 71, 83-94.
Kok, R.M., Heeren, T.J., Hooijer, C. & Dinkgreve, M.A. (1995). The prevalence of depression
in elderly inpatients. Journal of Affective Disorders, 33, 77-82.
Lord, F. (1980). Applications of Item Response Theory to Practical Testing Problems.
Hillsdale, NJ: Lawrence Erlbaum.
Ongel, U. & Smith, P.B. (1994). Who are we and where are we going? JCCP approaches its 100th
issue. Journal of Cross-Cultural Psychology, 25, 25-53.
Ortega, D.M. & Richey, C.A. (1998). Methodological issues in social work research with
depressed women of color. Journal of Social Service Research, 23(3-4): 47-70.
Ronen, S. & Shenkar, O. (1988). Clustering variables: The application of nonmetric multivariate
analysis techniques in comparative management research. International Studies of
Management & Organization, 18(3): 72-87.
Thissen, D., Steinberg, L., & Wainer, H. (1988). Use of item response theory in the study of
group differences in trace lines. In H. Wainer & H.I. Braun (Eds.), Test Validity
(pp. 147-170). Hillsdale, NJ: Lawrence Erlbaum.
Thissen, D., Steinberg, L., & Wainer, H. (1989, September). Detection of differential item
functioning using the parameters of item response models. Paper presented at the
Differential Item Functioning Conference, Educational Testing Service, Princeton, NJ.
Van de Vijver, F. & Leung, K. (1997). Methods and Data Analysis for Cross-Cultural
Research. Thousand Oaks, CA: Sage.
Vandenberg, R.J., & Lance, C.E. (2000). A review and synthesis of the measurement invariance
literature: Suggestions, practices, and recommendations for organizational research. Organizational
Research Methods, 2, 4-69.