Special Issue Co-editors:
• Thomas Fischer, Geneva School of Economics and Management, University of Geneva, firstname.lastname@example.org
• Donald C. Hambrick, Smeal College of Business, The Pennsylvania State University, email@example.com
• Gwendolin B. Sajons, ESCP Business School, firstname.lastname@example.org
• Niels Van Quaquebeke, Kühne Logistics University & University of Exeter, Niels.Quaquebeke@the-klu.org
Similar to the field of psychology (Baumeister, Vohs, & Funder, 2007), the fields of management and leadership have lost sight of studying what individuals in organizational and institutional settings actually do in terms of their behaviors; they also miss the mark in measuring individuals’ veritable feelings or thoughts. How should we study individuals, whether leaders, followers, observers, or any kinds of social agents, in terms of their behaviors (true actions and choices) as well as their psychological states (attitudes, perceptions, preferences, and feelings)?
Modeling what individuals actually do as well as their true states must lie at the core of any social science striving to understand causal links that can meaningfully inform theory and practice. That several branches of social science research have gone astray is largely due to a deep-rooted, almost unquestioned dependence on questionnaires as a research tool, despite substantial doubt regarding the extent to which questionnaires capture real actions or states (e.g., Hansbrough, Lord, & Schyns, 2015; Hirsch & Levin, 1999). This ritualized overreliance on questionnaires, and their increasing proliferation, has severely limited what social scientists understand about actual organizational dynamics (Alvesson, 2020). Questionnaire ratings depend on much more than what individuals actually do, think, or feel. Indeed, unmodeled variation at various levels (e.g., target, observer, and context) severely confounds measurement (Alvesson & Einola, 2019; Gottfredson, Wright, & Heaphy, 2020). Moreover, questionnaire responses are often elicited in contrived informational settings, driven by social desirability, bear no actual social or economic costs, or are gathered in hypothetical scenarios that do not capture real-world dynamics (Antonakis, 2017; Baumeister et al., 2007).
However, there is still hope: leadership (and management more broadly) can be a science, and researchers demonstrated as much as early as the first half of the 20th century, when they developed a remarkable variety of approaches for empirically studying leadership. For instance, Kurt Lewin and colleagues (1939) pioneered field experimentation and manipulated leadership styles, Preston and Heintz (1949) used laboratory experiments to study the differential effects of participatory and supervisory leadership, and Bales (1950) developed a behavioral coding scheme to examine leader emergence in initially leaderless groups. Despite their methodological heterogeneity, all these approaches have one thing in common: they experimentally manipulated or objectively measured leader behaviors.
Since then, much of social science research has pivoted, and the use of questionnaire measures to capture what individuals do, think, or feel is strongly ingrained in today’s research practices (Eden, 2020; Hunter, Bedell-Avers, & Mumford, 2007). Despite, or perhaps because of, their prevalence, questionnaire measures are seldom called into question. Yet numerous limitations make their use highly problematic, including the following:
1. Abstraction. Questionnaire measures do not actually capture individual behaviors but only perceptions thereof. Such measures are also oftentimes cognitively abstracted; the measurement items do not ask about concrete behavioral acts but rather try to capture broader concepts (as discussed in Van Quaquebeke & Felps, 2018). Moreover, the predictor is oftentimes conceptually close to the outcome or simply restates it, which precludes refutation (Alvesson, 2020; Antonakis, 2017; Wicklund, 1990).
2. Arbitrary measurement and bad concepts. It is common for researchers to use arbitrary metrics in questionnaires, which may engender meaningless results (Bass, Cascio, & O’Conner, 1974; Borsboom, Mellenbergh, & van Heerden, 2004; Edwards, Younge, & Long, 2018).
3. Demand effects. Respondents’ answers to questionnaire items are prone to demand effects and social desirability because they do not capture social dynamics that often bear costs and reactions (Roth & Slotwinski, 2020; Zizzo, 2010). Even more pernicious, simply measuring perceptions of behaviors via questionnaires can make the study purpose apparent to respondents and drive them to answer in a way they deem in line with the researchers’ hypotheses (Lonati, Quiroga, Zehnder, & Antonakis, 2018; Zizzo, 2010). Moreover, the way and the order in which questions are asked strongly affect how participants respond (Schwarz, 1999).
4. Confounded variance. Perceptual measures are often biased by multiple confounds (Hansbrough, Lord, & Schyns, 2015), including performance-cue effects and implicit leadership expectations (Eden & Leviatan, 1975; Lord, Binning, Rush, & Thomas, 1978; Rush, Thomas, & Lord, 1977).
5. Endogeneity. When explanatory or mediating variables are captured via questionnaire measures, whether in static or longitudinal settings, they very likely share common causes with the dependent variables they are meant to predict, so subsequent statistical analyses preclude establishing causality (Fischer, Dietz, & Antonakis, 2017; Sajons, 2020).
6. Non-consequential outcomes. Questionnaires are often used to measure outcome variables. However, this measurement practice can compromise validity, especially when questionnaires are used to measure actions (i.e., behaviors or decisions). Real actions are neither hypothetical nor inconsequential, whereas questionnaire responses are both. Hence, it is unclear to what degree questionnaires are suitable for capturing real actions (cf. Lonati et al., 2018).
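The endogeneity concern in point 5, and the experimentally randomized instrumental-variable remedy advocated in this call (Sajons, 2020), can be illustrated with a minimal simulation. All variable names and coefficients below are hypothetical, chosen only to show how an unobserved common cause biases a naive regression of an outcome on a questionnaire measure, while a randomized instrument recovers the true effect:

```python
# Hypothetical sketch: an unobserved common cause ("mood") drives both a
# questionnaire-based leadership rating and the performance outcome, so the
# naive OLS slope is biased; a randomized instrument (e.g., an encouragement
# treatment) identifies the true effect via the Wald/2SLS estimator.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_effect = 0.5

mood = rng.normal(size=n)                    # unobserved common cause
z = rng.integers(0, 2, size=n)               # randomized binary instrument
rating = 0.8 * z + mood + rng.normal(size=n)        # endogenous measure
outcome = true_effect * rating + mood + rng.normal(size=n)

# Naive OLS slope: cov(rating, outcome) / var(rating) -- biased upward,
# because "mood" inflates the covariance between rating and outcome.
ols = np.cov(rating, outcome)[0, 1] / np.var(rating)

# Instrumental-variable (Wald) estimate: cov(z, outcome) / cov(z, rating).
# Randomization makes z independent of "mood", purging the confound.
iv = np.cov(z, outcome)[0, 1] / np.cov(z, rating)[0, 1]

print(f"true effect: {true_effect}, naive OLS: {ols:.2f}, IV: {iv:.2f}")
```

With these hypothetical parameters the naive slope lands near 1.0, roughly double the true effect of 0.5, while the instrumental-variable estimate recovers it; the same logic underlies the experimentally randomized instruments discussed by Sajons (2020).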
Consequently, voices raising concerns have grown louder in recent years. For instance, Blom and Alvesson (2015) argue that the actual leader behaviors underlying much of leadership research remain unclear, and van Knippenberg and Sitkin (2013) bemoan that transformational leadership research confounds behaviors with their effects. Several other critiques have emerged recently (e.g., Alvesson, 2020; Alvesson & Einola, 2019; Gottfredson, Wright, & Heaphy, 2020). The common pattern underlying all these critiques is the following: questionnaire measures of leadership do not capture what leaders actually do, what their observers actually see, feel, or think, or how observers would actually react to a certain leader behavior or information environment in a given context. Yet researchers in management and applied psychology mostly ignore these “inconvenient truths.”
Questionnaires do have some utility, however. They can capture easily verifiable or objective data (e.g., demographics), can measure inherently perceptual and evaluative constructs (e.g., fairness perceptions) as dependent variables, or can serve descriptive purposes (e.g., comparing how employee job satisfaction changes over time without aiming to infer cause or effect). Furthermore, questionnaire measures can be used as explanatory variables if they are appropriately “instrumented” (as in instrumental-variable regression; Sajons, 2020). Nevertheless, the largely inappropriate dependence on questionnaires has steered the editorial policy of this journal toward seeking other ways to study individual behaviors, particularly when modeled as independent variables (Antonakis, 2017).
It is high time to move beyond the ritualized reliance on questionnaire measures and to capture actual behaviors and emotional or cognitive states, in ecologically valid and more rigorous ways. Methodological advances can enable a field to take a major leap forward. For instance, the systematic use of experimental and quasi-experimental methods has spurred a “credibility revolution in the fields of labor, public finance, and development economics” (Angrist & Pischke, 2010, p. 26). For the fields of management and leadership in the 2020s, the widespread rediscovery of experimentation, quasi-experimentation, and natural experiments, in conjunction with the use of objective measures as standard research practice, could be a similarly promising pathway (Eden, 2017, 2020; Podsakoff & Podsakoff, 2019; Sajons, 2020; Sieweke & Santoni, 2020). Decades ago, scholars already managed to meticulously manipulate and objectively measure leader behaviors; hence, there is no reason to believe that no feasible alternative to questionnaires exists today. Moreover, recent and ongoing improvements in technology open many additional possibilities to measure behavior objectively (Wenzel & Van Quaquebeke, 2018). Examples include eye-tracking data (Gerpott, Lehmann-Willenbrock, Silvis, & Van Vugt, 2018) and computer-based coding of communication (Guo, Yu, & Gimeno, 2017). Deep neural networks (see LeCun, Bengio, & Hinton, 2015) remain particularly underutilized in this regard.
Against this background, this Special Issue is dedicated to reinvigorating and advancing investigations into actual behaviors and psychological states by manipulating or objectively measuring them, so that our field can take a similar leap forward. Contributions may include (but are not limited to) the following types of theoretical, methodological, or empirical work in a social science context (e.g., leadership, or management more broadly):
1. Studies developing clear definitions of important constructs that were heretofore typically captured via questionnaire measures but can be manipulated and objectively coded. The substantive change in the conceptualization of charismatic leadership and how it can be experimentally manipulated is an illustration of such an approach (e.g., see Meslec, Curseu, Fodor, & Kenda, 2020, who also employ consequential outcomes).
2. Studies systematically comparing empirical analyses based on questionnaire measures to those based on unobtrusive or archival measures (e.g., observing real behavior). Chatterjee and Hambrick’s (2007) work on non-questionnaire-based measures of narcissism is a case in point.
3. Studies systematically identifying types of questionnaire measures that are less prone to (conscious or unconscious) misreporting. For instance, Gioia and Sims (1985) developed a measure of leader behaviors that is not prone to bias due to performance cues and people’s implicit leadership theories.
4. Studies using direct observational or real-time measures of behavior, archival data, and neurophysiological measures for variables relevant to the study of leadership or other social science phenomena (e.g., Antonakis, 2017; Gerpott et al., 2018; Wenzel & Van Quaquebeke, 2018).
5. Studies eliciting preferences and attitudes through list experiments (Blair & Imai, 2012) and randomized response protocols (Greenberg et al., 1969), which are useful for measuring true states in contexts where social desirability and social norms may constrain true responses.
6. Studies using field or laboratory experiments to highlight biases, demand effects, or endogeneity issues originating from questionnaire measures, as well as game-theoretic designs that consider the costs and benefits of choices and actions (Zehnder, Herz, & Bonardi, 2017).
7. Studies purging questionnaire measures that serve as explanatory variables of endogeneity by instrumenting them with experimentally randomized instrumental variables (Meslec et al., 2020; Sajons, 2020) or with measured variables that are exogenous (Cavazotte, Moreno, & Hickmann, 2012), whether in laboratory or field settings.
8. Reviews of research to identify problematic areas in studying real behaviors and psychological states and to chart new territory for researchers. Such reviews could serve as go-to guides for scholars developing their studies and seeking to jettison questionnaires.
9. Theoretical articles that contribute to the development of constructs and measures fostering the study of real behaviors in situ. Conceptual work on event-based approaches to studying leadership and management is a case in point (e.g., Hoffman & Lord, 2013; Morgeson, Mitchell, & Liu, 2015).
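The list-experiment design mentioned in point 5 can be sketched with simulated data. The difference-in-means estimator below follows the logic described by Blair and Imai (2012); the prevalence value, item counts, and variable names are all hypothetical, chosen only for illustration:

```python
# Illustrative sketch of a list experiment ("item count technique").
# Respondents report only HOW MANY items on a list they endorse, never which
# ones, shielding them from disclosing a sensitive attitude directly.
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
true_prevalence = 0.30                 # hypothetical share holding the sensitive attitude

baseline = rng.binomial(3, 0.5, size=n)            # counts over 3 innocuous items
sensitive = rng.random(n) < true_prevalence        # latent sensitive attitude
treated = rng.integers(0, 2, size=n).astype(bool)  # random assignment to list version

# The treatment group's list adds the sensitive item to the three innocuous ones,
# so their reported count rises by one exactly when they hold the attitude.
counts = baseline + (treated & sensitive)

# Because assignment is random, the difference in mean counts between groups
# identifies the prevalence of the sensitive attitude.
estimate = counts[treated].mean() - counts[~treated].mean()
print(f"estimated prevalence: {estimate:.2f}")
```

Because no individual response reveals the sensitive item, such designs blunt the social-desirability pressures that plague direct questionnaire items, at the cost of a noisier (aggregate-level) estimate.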
For this Special Issue, the submission process has two stages. Authors can submit their manuscripts in the form of a double-spaced 10-page proposal (references, tables, figures, and other end matter are not included in the 10-page maximum), starting from 1 December 2020 but no later than the submission deadline of 26 February 2021, online via The Leadership Quarterly’s Editorial Manager submission system at https://www.editorialmanager.com/LEAQUA/default.aspx. Full manuscripts submitted at this stage will be rejected. The proposal must carefully describe how the planned manuscript can contribute to the Special Issue and provide a detailed summary of the contribution. Authors of accepted proposals will be invited to submit a full manuscript within six months (more time can be provided if required for data gathering). To ensure that all manuscripts are correctly identified for consideration for this Special Issue, it is important that authors select “SI: Beyond Questionnaires” when they reach the “Article Type” step in the submission process. Manuscripts should be prepared in accordance with The Leadership Quarterly’s Guide for Authors, available on the journal web page. All submitted manuscripts will be subject to The Leadership Quarterly’s double-blind review process.
Authors should carefully consult the recent editorial statement by the journal editors: http://doi.org/10.1016/j.leaqua.2019.01.001 . Manuscripts that do not adhere to the editorial mission of the journal will be rejected.
Research data forms the backbone of research articles and provides the foundation on which knowledge is built. Researchers are increasingly encouraged, or even mandated, to make research data available, accessible, discoverable and usable. Although not mandatory, the journal encourages authors to submit their data at the same time as their manuscript. Further information can be found at: https://www.elsevier.com/authors/author-services/research-data.
Alvesson, M. (2020). Upbeat leadership: A recipe for—or against—“successful” leadership studies. The Leadership Quarterly.
Alvesson, M., & Einola, K. (2019). Warning for excessive positivity: Authentic leadership and other traps in leadership studies. The Leadership Quarterly, 30(4), 383-395.
Angrist, J. D., & Pischke, J.-S. (2010). The credibility revolution in empirical economics: How better research design is taking the con out of econometrics. Journal of Economic Perspectives, 24(2), 3-30.
Antonakis, J. (2017). On doing better science: From thrill of discovery to policy implications. The Leadership Quarterly, 28(1), 5-21.
Bales, R. F. (1950). Interaction process analysis: A method for the study of small groups. Cambridge, MA: Addison-Wesley.
Bass, B. M., Cascio, W. F., & O’Conner, E. J. (1974). Magnitude estimations of expressions of frequency and amount. Journal of Applied Psychology, 59(3), 313-320.
Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2(4), 396-403.
Blair, G., & Imai, K. (2012). Statistical analysis of list experiments. Political Analysis, 20(1), 47-77.
Blom, M., & Alvesson, M. (2015). All-inclusive and all good: The hegemonic ambiguity of leadership. Scandinavian Journal of Management, 31(4), 480-492.
Borsboom, D., Mellenbergh, G. J., & Van Heerden, J. (2004). The concept of validity. Psychological Review, 111(4), 1061.
Cavazotte, F., Moreno, V., & Hickmann, M. (2012). Effects of leader intelligence, personality and emotional intelligence on transformational leadership and managerial performance. The Leadership Quarterly, 23(3), 443-455.
Chatterjee, A., & Hambrick, D. C. (2007). It’s all about me: Narcissistic chief executive officers and their effects on company strategy and performance. Administrative Science Quarterly, 52(3), 351-386.
Eden, D. (2017). Field experiments in organizations. Annual Review of Organizational Psychology and Organizational Behavior, 4, 91-122.
Eden, D. (2020). The science of leadership: A journey from survey research to field experimentation. The Leadership Quarterly.
Eden, D., & Leviatan, U. (1975). Implicit leadership theory as a determinant of the factor structure underlying supervisory behavior scales. Journal of Applied Psychology, 60(6), 736-741.
Edwards, J. R., Younge, A., & Long, E. C. (2018). Arbitrary metrics in industrial and organizational psychology research. Paper presented at the 2018 annual meeting of the Society for Industrial and Organizational Psychology, Chicago, IL.
Fischer, T., Dietz, J., & Antonakis, J. (2017). Leadership process models: A review and synthesis. Journal of Management, 43, 1726-1753.
Gerpott, F. H., Lehmann-Willenbrock, N., Silvis, J. D., & Van Vugt, M. (2018). In the eye of the beholder? An eye-tracking experiment on emergent leadership in team interactions. The Leadership Quarterly, 29(4), 523-532.
Gioia, D. A., & Sims Jr, H. P. (1985). On avoiding the influence of implicit leadership theories in leader behavior descriptions. Educational and Psychological Measurement, 45(2), 217-232.
Gottfredson, R. K., Wright, S. L., & Heaphy, E. D. (2020). A critique of the leader-member exchange construct: Back to square one. The Leadership Quarterly.
Greenberg, B. G., Abul-Ela, A.-L. A., Simmons, W. R., & Horvitz, D. G. (1969). The unrelated question randomized response model: Theoretical framework. Journal of the American Statistical Association, 64(326), 520-539.
Guo, W., Yu, T., & Gimeno, J. (2017). Language and competition: Communication vagueness, interpretation difficulties, and market entry. Academy of Management Journal, 60(6), 2073-2098.
Hansbrough, T. K., Lord, R. G., & Schyns, B. (2015). Reconsidering the accuracy of follower leadership ratings. The Leadership Quarterly, 26(2), 220-237.
Hirsch, P. M., & Levin, D. Z. (1999). Umbrella advocates versus validity police: A life-cycle model. Organization Science, 10(2), 199-212.
Hoffman, E. L., & Lord, R. G. (2013). A taxonomy of event-level dimensions: Implications for understanding leadership processes, behavior, and performance. The Leadership Quarterly, 24(4), 558-571.
Hunter, S. T., Bedell-Avers, K. E., & Mumford, M. D. (2007). The typical leadership study: Assumptions, implications, and potential remedies. The Leadership Quarterly, 18(5), 435-446.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436-444.
Lewin, K., Lippitt, R., & White, R. K. (1939). Patterns of aggressive behavior in experimentally created “social climates”. The Journal of Social Psychology, 10(2), 269-299.
Lonati, S., Quiroga, B. F., Zehnder, C., & Antonakis, J. (2018). On doing relevant and rigorous experiments: Review and recommendations. Journal of Operations Management, 64, 19-40.
Lord, R. G., Binning, J. F., Rush, M. C., & Thomas, J. C. (1978). The effect of performance cues and leader behavior on questionnaire ratings of leadership behavior. Organizational Behavior and Human Performance, 21(1), 27-39.
Meslec, N., Curseu, P., Fodor, O. C., & Kenda, R. (2020). Effects of charismatic leadership and rewards on individual performance. The Leadership Quarterly.
Morgeson, F. P., Mitchell, T. R., & Liu, D. (2015). Event system theory: An event-oriented approach to the organizational sciences. Academy of Management Review, 40(4), 515-537.
Podsakoff, P. M., & Podsakoff, N. P. (2019). Experimental designs in management and leadership research: Strengths, limitations, and recommendations for improving publishability. The Leadership Quarterly, 30(1), 11-33.
Preston, M. G., & Heintz, R. K. (1949). Effects of participatory vs. supervisory leadership on group judgment. The Journal of Abnormal and Social Psychology, 44(3), 345.
Roth, A., & Slotwinski, M. (2020). Gender norms and income misreporting within households. ZEW Discussion Paper No. 20-001, Mannheim. Retrieved from http://ftp.zew.de/pub/zew-docs/dp/dp20001.pdf
Rush, M. C., Thomas, J. C., & Lord, R. G. (1977). Implicit leadership theory: A potential threat to the internal validity of leader behavior questionnaires. Organizational Behavior and Human Performance, 20, 93-110.
Sajons, G. (2020). Estimating the causal effect of measured endogenous variables: A tutorial on experimentally randomized instrumental variables. The Leadership Quarterly. https://doi.org/10.1016/j.leaqua.2019.101348.
Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54(2), 93-105.
Sieweke, J., & Santoni, S. (2020). Natural experiments in leadership research: An introduction, review, and guidelines. The Leadership Quarterly, 31(1), 101338.
van Knippenberg, D., & Sitkin, S. B. (2013). A critical assessment of charismatic-transformational leadership research: Back to the drawing board? The Academy of Management Annals, 7(1), 1-60.
Van Quaquebeke, N., & Felps, W. (2018). Respectful inquiry: A motivational account of leading through asking questions and listening. Academy of Management Review, 43, 5-27.
Wenzel, R., & Van Quaquebeke, N. (2018). The double-edged sword of Big Data in organizational and management research: A review of opportunities and risks. Organizational Research Methods, 21, 548-591.
Wicklund, R. A. (1990). Zero-variable theories in the analysis of social phenomena. European Journal of Personality, 4(1), 37-55.
Zehnder, C., Herz, H., & Bonardi, J.-P. (2017). A productive clash of cultures: Injecting economics into leadership research. The Leadership Quarterly, 28(1), 65-85.
Zizzo, D. J. (2010). Experimenter demand effects in economic experiments. Experimental Economics, 13, 75-98.