René Pellissier

Assisted by Hester Nienaber, Nadine Lernhard, Filip Hamman and Zama Mkhize

Department of Business Management

University of South Africa

Pretoria, South Africa

Keywords: South African management research practices, management journals, research problem, research design, research ontologies, content analysis


In the management sciences, there are different approaches to research. These depend upon the specific research questions and objectives and their impact upon the findings. At postgraduate level, the supervisor and the student need to be au fait with the full spectrum of research strategies and be able to link the appropriate design to the specific research question and objectives. More often than not, however, qualitative and quantitative methods are viewed as opposites rather than as strategies on a continuum. This limits the research and its value and allows the predisposition of the supervisor to influence the student's work. It has also long been believed that certain specialisations/functional areas in the management sciences use specific research strategies rather than employing the full spectrum available. This paper provides an objective view of the research strategies employed by matching these strategies to the field of specialisation and to the planned outcomes of the research.

This article examines the five accredited management journals in South Africa during the period 2006/2007 in order to determine the link between the research objectives and the research design employed. The journal articles were studied using proportional stratified random sampling, examining the research design, the strategies employed and the conclusions drawn. A model presented by Scandura and Williams (2000) was selected for use. However, the model was not sensitive enough with respect to the wide range of inaccuracies picked up by the review team. The model was therefore reworked using a content analysis approach in conjunction with theory building and modelling. The reworked research metric was then used to study 120 of the 208 articles published over the time period under review. The stratification allows for possible differences between the journals, and trends were developed accordingly. From these findings it seems that the current review process is not as strict as we believe, and/or the reviewers are not strict enough or not strong enough in all areas of research design and research methodology to see the gaps in the articles. The formats of the review forms and the time constraints reviewers face may also compromise the important function of the review process. Despite this, accredited outputs play an important role in funding and academic promotions. These aspects should be investigated more intensely and, to this end, it is our aim to continue with this study and also to compare the local contribution with its international counterparts.

The value of this research on research practices in the management sciences lies in its objective view of the strategies in use in the management journals of note. It will thus enable the supervisor and the postgraduate student to select the appropriate design from the broad range of possibilities rather than to stereotype method and specialisation. On a micro level, the research enhances research in the management sciences, whilst on the macro level it adds value to the socio-economic development of the country.


This article researches research practices in the field of management in South Africa. Management scientists and researchers produce comparatively few research outputs in a domain that begs for expansion because of its relevance and value added to society. It is thus appropriate to investigate the publications accepted by South African (accredited) management journals and match these to the pertinent research problems/hypotheses/questions investigated by the researchers in the various fields in the management domain. Academic journal accreditation is undertaken by the Department of Education (DoE) and involves a rigorous process of review. Accredited journals have to follow a strict blind peer review process linked to the specific house style and specialisation or focus of the journal. Accepted articles are subsidised by the DoE based on an internal and external review system.

It is common knowledge that academics are not producing sufficient (accredited) research articles. There are many reasons for this phenomenon: for instance, increasing administrative and academic workloads (driven by rising student numbers and declining academic numbers), a lack of research support and, most importantly, a lack of understanding of research fundamentals, specifically in the field of management research. (This is a focal point of this article.) However, these do not negate the demand for (subsidisable) research outputs.

In researching research practices in the field of management, this article will thus investigate the research problem, the research design, the research domain and the research outcomes, and cross-tabulate these with respect to their interdependencies. The following questions arise: Is it possible that a specific field in the management sciences relates to a specific research design? Is it possible that the choice of technique could influence the outcome of the research? To what extent is the current body of literature in the management sciences informed by inappropriate research design and subsequent outcomes? Design choices may, to some extent, impact upon the conclusions drawn, and it is thus of extreme importance that the appropriate design is matched to the research problem and the specific domain in which the research is undertaken. One immediate benefit of this research is the identification of specific training and development that needs to be undertaken to increase research outputs in terms of quality (research value to the body of literature) and quantity (research value to the researcher and his/her affiliation). Another beneficial aspect is the development of a conceptual framework for matching research design against the research problem and the research field, and the subsequent deeper understanding of the positivist and phenomenological designs. To this end, an in-depth analysis of quantitative and qualitative strategies seems logical, and the development of the conceptual framework should provide an understanding of the critical aspects involved in management research.

Given the importance of management research in terms of its developmental aspects for society and theory, and the growing importance of publication for the sake of accredited outputs, the strategies and design techniques specifically relevant to the field of management theory and applications need to be investigated. Previous research (Scandura & Williams, 2007; Beaty, Nkomo & Kriek, 2006; Hitt, Beamish, Jackson & Mathieu, 2007; Matzler & Renzl, 2005) seemed to focus on limited aspects of management research, whereas our preliminary research work indicated a lack of structure, a lack of planning and logic, and little linkage between the research problem, the research design and the research outcomes. The research by Beaty et al. (2006) stresses the need for the building of management theory in the South African context, whereas formulations are generally still done in the US or Western Europe. Their sample was a selection of DoE-accredited management journals between 1994 and 2004. Using an archival review technique, they applied the Mendenhall, Beaty and Oddou (1993) classification consisting of qualitative, quantitative, conceptual or joint research. Their survey found that quantitative research designs were the most prevalent (54%) and joint research designs the least (3%). We were unsure whether their results followed from a random sample; if not, the Chi-square results offered would be invalid. Their research did not link the research problem, research domain, research levels or research outcomes.

This leads us to our research objectives. The primary research objective is to determine possible interdependencies between the research problem (in management research) and the research design, and to develop a conceptual framework to review management research in terms of quality and value added to the body of literature. The secondary research objective is to investigate the interdependencies between the research ontologies, strategies, levels, domains, analysis techniques and the purposes of the research. The research employs a cross-sectional study of a randomly selected sample from the five accredited general management journals in South Africa over the period 2006/2007. The research follows on a previous study by Scandura and Williams (2007) in which they researched practices, trends and implications in research methodologies in management. The research applies a content analysis approach to a random sample of all articles published in the domain of management research in South Africa. Specific hypotheses can then be investigated based on these outcomes.


This article investigates the implementation of research fundamentals, and we thus need to put research itself in context. There are numerous definitions of research. Kerlinger (1986:6) writes that (scientific) research is 'the systematic, controlled, empirical, and critical investigation of natural phenomena guided by theory and hypotheses about the presumed relations among such phenomena'. Leedy and Ormrod (2005:19) concur: [scientific] 'research is the systematic process of collecting and analysing information to increase our understanding of the phenomenon being studied.' It is the function of the researcher to contribute to the understanding of the phenomenon and to communicate that understanding to others. Research is systematic (which implies the use of thorough and rigorous planning and procedures), controlled (which implies objectivity and consistency in the sense that results should be replicable by other researchers under the same conditions and in the same way), empirical (which implies that the research is grounded in reality), critical and analytical (which implies a probing process to identify problems and exacting methods to arrive at their solutions), logical (which implies arguments and conclusions that follow rationally from the evidence obtained) and theoretical and conceptual (which implies a grounding in conceptual and theoretical structures that direct the research). Our investigation will therefore have to explore the systematic process of research, review the critical and analytical aspects involved, and examine the theoretical and conceptual aspects as well as the structures that direct the management research conducted. In the planning phase, the research problem is matched to an appropriate research strategy that leads to a specific analysis framework and required outputs.
These outputs are then assessed for quality and contribution using rigorous peer review systems, typically through publication in accredited journals or some other formal review process. There are specific outcomes to be measured against. The broad aims of scientific research are typically accepted to be the understanding, explanation, prediction and control of natural phenomena.

Most research can be categorised as belonging to one of the following categories (Rosenthal & Rosnow, 1991:3): pure research (leading to theoretical development, whether there are practical outputs or not), applied research (intended to solve a specific problem, of which the common form is the evaluation of a particular course of action) and action research (where the main focus is that the research itself must lead to change). Researchers in the social sciences use a continuum of ontological approaches to ensure the viability and integrity of their work. These range from the purely quantitative and traditional approach to the qualitative interpretive approach, depending on our perceptions of reality. Pope and Mays (1995:42) provide another perspective: '[research is] an overstated dichotomy between quantitative and qualitative social science', whilst Hussey and Hussey (1997) maintain that all research falls within a band of possibilities (positivist to phenomenologist) which allows combinations from both on a continuum.

According to Babbie and Mouton (2001), the research design relates to the planning of scientific inquiry, that is, designing a strategy for finding something out. This entails specifying 'what', 'why' and 'how' the researcher needs to find out. They differentiate research designs on the basis of (i) 'empirical' (e.g. survey, experiments, secondary data analysis, observation, case study) vs. 'non-empirical'; (ii) primary data vs. secondary data; and (iii) numerical vs. textual data sources. Furthermore, they refer to the research paradigm, which can be quantitative (gaining understanding by explaining human behaviour) as opposed to qualitative (gaining understanding by describing human behaviour). Their use of the term 'research paradigm' corresponds to that of Shah and Corley (2006). Research methodology is defined as the methods, tools, techniques and procedures employed in the process of implementing the research design (Babbie & Mouton, 2001). Bryman and Bell (2007) are of the opinion that research designs refer to different frameworks, such as experiments, surveys, cohort studies, case studies and comparative designs, used for collecting and analysing data. They use the term 'research strategy' to indicate a general orientation to conducting research, which can be either quantitative or qualitative in nature. Neuman (2006) uses the phrase 'strategies for designing a study' and points out that there are differences between a qualitative and a quantitative style of doing research, each with different design issues that need to be considered. He further differentiates between the deductive and inductive reasoning approaches to research, which aid in planning data collection and analysis.


The primary research design in this research is content analysis in order to determine the presence of certain words or concepts within the publications under review. Content analysis requires manageable categories (for instance, themes, sentences or phrases) to be examined.  The research is based on the categories presented by Scandura and Williams (2007).  In their study, Scandura and Williams compared the strategies employed in management research in two periods: 1995-97 and 1985-87. These authors used the following framework to review research methodology practices and trends in management research.

Table 1: Review framework for research methodology practices and trends in management research
Primary research strategy    Quantitative, qualitative
Research strategies    Formal theories/literature reviews, sample surveys, laboratory experiments, experimental simulations, field studies (using primary/secondary data), field experiments, judgment tasks, computer simulations
Substantive content domains    Leadership, careers, international management, strategic management, organisation theory, organisational behaviour, HRM, research methods
Levels of analysis    Individual, group, organisational
Time frame of study (internal validity)    Cross-sectional, longitudinal
Economic sector from which sample drawn (internal validity)    Private sector; non-governmental; public sector; governmental; not-for-profit; governmental & non-governmental; mixed; not reported; student subjects
Occupation of subjects (external validity)    Professional, managerial, manufacturing, healthcare, education, blue collar, white collar, technical, students, mixed
Nature of the construct validation (if any) (construct validity)    Confirmatory factor analysis; exploratory factor analysis; discriminant analysis; convergent/predictive validity; convergent validity reports
Other reliability measures (if any)
Primary data type    Qualitative; quantitative
Data source    Single; multiple
Primary data analysis    ANOVA (MANOVA, MANCOVA); linear regression analyses (simple, multiple, hierarchical); correlation techniques; meta-analysis; linear techniques for categorical dependent variables; computer simulation (e.g. Monte Carlo simulation)
Source: Scandura and Williams (2007)

Hofstede (1996) posits that research traditions and publication patterns are mainly attributable to cultural differences and the nationality of the author. He uses the example of what an organisation is to illustrate this point (in the French tradition it is 'power', in the German it is 'order', in the American it is 'the market', and in the Nordic countries it is 'equality'). Matzler and Renzl (2005) concluded that German researchers focus more on theory development, publishing a very high number of purely conceptual articles, whereas the American approach is more empirical. According to them, American and European researchers frequently collaborate at international and national levels. Beaty, Nkomo and Kriek (2006) selected what they believed to be the six primary research outlets in management in South Africa (South African Journal of Business Management, South African Journal of Labour Relations, South African Journal of Industrial Psychology, South African Journal of Management Science, Management Dynamics and the Southern African Business Review), chosen because of their clear editorial focus on publishing South African management-related manuscripts. They pointed out that South African management research also appears in international journals whose primary focus is publishing research from non-South African populations. These authors examined the belief that qualitative research must always precede quantitative research in order for a subject field to mature. They focused their study mainly on the application of qualitative or quantitative research, or both. Their archival approach (1994-2004) reviewed all articles in these journals, categorising them as qualitative, quantitative, conceptual or joint research. They concluded that most management research followed a quantitative approach, followed by a conceptual approach, then a qualitative and then a joint approach.
They concluded their study using tests for independence, which is of course not appropriate, as there is no evidence of a random sample (in their design it seems that either the population as a whole was used or the sample taken from the population was purposive rather than random; both cases exclude any inference). It was not clear how the design was constructed. However, the tallies obtained do seem indicative of their conclusion.

This research is based on the content analysis approach. Berelson (1952) wrote that content analysis is the manual or automated coding of documents, transcripts, news articles, or even audio or video media, to obtain counts of words, phrases, or word-phrase clusters for purposes of statistical analysis. The researcher creates a 'dictionary' that clusters words and phrases into conceptual categories for purposes of counting. While content analysis generally focuses on printed text and transcripts, DuRant et al. (1997) believe it applicable to any form of communication. Berelson (1952) listed the main applications of content analysis: to describe trends in content over time, to describe the relative focus of attention for a set of topics, to compare international differences in content, to compare group differences in content, to compare individual differences in communication style, to trace conceptual development in intellectual history and to compare actual content with intended content, to name a few. Krippendorff (2004) identifies five key processes inherent to content analysis. Unitizing: the researcher must establish the unit of analysis (word, meaning, sentence, paragraph, article, news clip, document, etc.). Sampling: usually the universe of interest is too large to study the content of all units of analysis, and instead units must be sampled; sampling involves counting, which may require the researcher to develop thesauruses (so that different terms with like meanings are counted under the same construct) and expert systems or other rule engines (so that the proper contextual valence is assigned to each counted construct). Reducing: content data must be reduced in complexity, usually by employing conventional summary statistical measures; coding and statistical analysis are covered by Hodson (1999). Inferring: contextual phenomena must be analysed to provide the context for findings. Narrating: conclusions in the content analytic tradition are usually communicated using narrative traditions and discursive conventions.
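The unitizing, sampling and reducing steps above can be illustrated with a minimal conceptual-analysis sketch in Python. The thesaurus and abstract below are hypothetical, purely for illustration; an actual study would use a validated coding dictionary and human coders.

```python
from collections import Counter

def count_constructs(text: str, thesaurus: dict) -> Counter:
    """Count occurrences of thesaurus terms, grouped under their constructs."""
    lowered = text.lower()
    counts = Counter()
    for term, construct in thesaurus.items():
        counts[construct] += lowered.count(term)
    return counts

# Hypothetical thesaurus: different terms with like meanings are counted
# under the same construct, as Krippendorff's sampling step requires.
thesaurus = {
    "survey": "quantitative",
    "questionnaire": "quantitative",
    "interview": "qualitative",
    "case study": "qualitative",
}

abstract = ("The survey used a structured questionnaire, "
            "followed by in-depth interviews with managers.")

print(count_constructs(abstract, thesaurus))
# quantitative: 2 (survey, questionnaire); qualitative: 1 (interviews)
```

The resulting construct frequencies are the raw material for the reducing step, where conventional summary statistics are applied.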

Brown (1969:21) wrote that 'at times, a scholarly discipline should examine itself', while Buboltz, Miller and Williams (1999) and Hill, Nutt and Jackson (1994) believe that content analyses of journal articles provide an excellent avenue for reviewing the state of affairs. Content analysis as a review process can include the examination of the content of articles published in a journal across a specific time period (e.g. Buboltz et al., 1999; Nilsson et al., 2001; Pelsma & Cesari, 1989) or can focus on specific content areas, such as psychotherapy process (Hill et al., 1994), school consultation (Alpert & Yammer, 1983) or racial/ethnic minority research (Perez, Constantine & Gerard, 2000; Ponterotto, 1988). There are two broad types of content analysis: conceptual analysis and relational analysis. In the first, the existence and frequency of concepts (words or phrases) are established, whilst in the second, the relationships between the concepts are investigated. The research design employed in this paper is summarised in the table below.

Table 2: Research design followed in this paper
Domain     Management sciences

Purpose    To establish the research dimensions of management research articles published in DoE accredited South African management journals and to determine whether these articles comply with the requirements of being 'scientific'
Epistemology    Phenomenological

Strategy     Theoretical/Conceptual

Sample     Proportional stratified random sampling

Analytical technique    Pilot study
Descriptive statistics
Chi-square tests for homogeneity
Multinomial logistic regression

Time frame    Cross sectional

Level    Organisational
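Among the analytical techniques listed, the Chi-square test for homogeneity compares the distribution of a categorical variable (e.g. design type) across the journal strata. The counts below are hypothetical, for illustration only; the statistic is computed from first principles rather than with a statistics library.

```python
def chi_square_homogeneity(observed):
    """Chi-square statistic and degrees of freedom for an r x c count table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    dof = (len(observed) - 1) * (len(observed[0]) - 1)
    return stat, dof

# Hypothetical counts: rows = two journals, columns = quantitative vs qualitative designs.
observed = [[30, 10],
            [20, 20]]
stat, dof = chi_square_homogeneity(observed)
print(round(stat, 2), dof)  # 5.33 with 1 degree of freedom
```

The statistic is then compared with the Chi-square critical value for the given degrees of freedom; this is valid only if the counts come from random samples, which is precisely the concern raised about earlier studies.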

Because journal articles tend to mirror the values and interests of authors, journal editors and the field at large, content analyses offer insight into the values that drive scholarly activities during a certain period. Nilsson, Love, Taylor and Slusher (2001) used content analysis to identify the areas that received the most attention from 1991 to 2001: academic/career issues, multicultural issues, symptoms/disorders and counselling process.


The research was undertaken in distinct phases. During the initiation phase, relevant literature was studied and the research introduced to the editors of the journals selected for the study. Thereafter the actual work took place in three stages (pilot study, redesign of framework and review of journals).

A team consisting of five researchers (reviewers) was formed to undertake the project. The five management research journals accredited by the DoE were selected for the study. The research project was introduced to the editors of the selected management journals through a letter sent by electronic mail. In a structured questionnaire, the editors were requested to provide information in terms of starting date, number of copies per year, number of copies published, number of submissions per issue, acceptance rate, reasons for non-acceptance, editorial policy, academic affiliation, cost per issue, price per issue, website address, ISSN number, scope of journal, date of accreditation, ratio of local to international content, and full-text availability online. This initial phase allowed validation of the management journals in terms of accreditation, purpose and scope of journal, and editorial policy (see Table 5).

Content analysis according to the identified themes (stages 1-3)
The project team conducted a thorough literature review into management research practices and the extant body of theory. The research population is taken as the total number of articles published in the five management journals during 2006/2007. In the time period under investigation (2006/2007), a total of 208 articles was published. The main thrust of the research was then undertaken in three progressive stages. In stage one, a pilot study was conducted using ten randomly selected articles from the research population. The sample results were used to make changes to the selected framework according to the diversity of the responses and the problems incurred with the framework's limitations. In stage two, the Scandura and Williams (2007) framework was revised according to the outcomes of the pilot study into a research framework covering all the stages of research. This led to the review framework for research practices in management publications in South Africa. In stage three, the four independent reviewers and the independent judge (acting as arbitrator) studied a random sample of 120 articles (using the five journals as strata) and reached agreement on the content analysis outcomes according to the proposed review framework. These stages are discussed below. All papers were read and marked independently according to the following scheme:
+     =    Did not say – we interpreted to be relevant
=    Analytical technique not fit for sample
*     =    Domain not specified explicitly
=       To be confirmed at meeting with arbiter
√     =       Agreement at meeting with four reviewers
√ √     =       Agreed upon at meeting with arbiter.

Stage 1: A pilot study of ten articles randomly selected from the population was undertaken and reviewed using the Scandura and Williams framework, to assist the four independent reviewers in understanding the framework and identifying problem areas. This sample size is small because of the time taken to review the original round of articles. The result of the pilot study was unexpected, as we realised that in most of the articles the purpose, method, sampling and analytical technique were in conflict. The following is a brief list of inaccuracies found that led to the revision of the framework (see Tables 4a and 4b): the purpose was not clearly stated (hence it is not clear whether the design and methodology were in accordance with the purpose); the population was not mentioned; the sampling procedure and technique were not specified; the domain was not specified (e.g. IT, Marketing, HR, etc.); only in a minority of cases did authors mention the time frame (e.g. cross-sectional survey); only in a limited number of cases did authors mention whether the study was quantitative or qualitative; in a few cases we questioned the explicitly mentioned option, as the 'content' did not correspond to the option; in a number of cases the authors indicated that the purpose was to test for relationships (for which probability sampling is required to do inferential statistics), yet used non-probability sampling (such as purposive or convenience sampling) and, if not explicitly stated, implied that they generalised their findings (only one case highlighted that a convenience sample was used together with non-parametric statistics, with the findings not generalised since a non-probability sample prohibits generalisation); the titles of the articles and the content did not correspond; the title and the hypotheses tested did not correspond; the analytical technique applied did not achieve the required outcomes; and authors used populations but employed inferential statistics.

The implication of these observations was that either these articles were flawed, and as such unreliable and invalid, rendering their contributions void, or the framework used was not appropriate to determine the trends and linkages. It soon became apparent that the selected framework did not allow for a clear and unambiguous problem statement, the research objectives, the clarity of the research design or process followed, the sampling design as matched to the objectives and/or the analytical techniques employed, or the conclusions drawn. In some instances the panel agreed that the research findings and conclusions had little bearing on the preceding objectives, designs or analytical techniques used. The framework by Scandura and Williams was thus expanded to include a detailed methodology review and issues of validity and reliability. This framework was revised through a number of iterations until the panel felt that it identified all the dimensions of good research established through the literature study. These dimensions were identified as: research strategy, research domain, research level and research technique.

Stage 2: The population was stratified according to the five management journals under investigation. The second stage used proportional random selection of the 208 articles spread over the five journals (strata). This was done because the initial investigation showed differences in terms of house style, purpose and themes, number of articles, acceptance rate, price per copy and distribution. The stratification allowed for the differences between the journals and achieves higher precision for the same sample size than sampling without stratification. Proportional random sampling was used because the number of articles differed between the journals (see Table 3). In the proportional allocation, the percentage of articles selected per stratum remains approximately the same although the numbers of articles differ between journals (about 58% of the population).

Table 3: Proportional stratification sampling scheme
Number of accredited articles published in 2006/2007 per journal    JOURNAL 1    JOURNAL 2    JOURNAL 3    JOURNAL 4    JOURNAL 5    TOTALS
Total in journals (Ni)    32    35    14    46    81    208 (N = ∑Ni)
Total in sample (ni)    18    20    8    27    47    120 (n = ∑ni)
% selected per journal    56.3    57.1    57.1    58.7    58.0    57.7

The sample size of n=120 was chosen and proportional random allocation employed using a random number generator. The 120 articles were allocated across the five strata in proportion to the sizes of the subpopulations. Thus the sample sizes of n1=18, n2=20, n3=8, n4=27 and n5=47 correspond proportionally to the subpopulation sizes (N1=32, N2=35, N3=14, N4=46 and N5=81). Overall, the sampling used around 58% of the population elements after the population had been grouped into units that belong together (i.e. according to the specifics of the research journal, its style and requirements).
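The arithmetic behind Table 3 can be reproduced with a short sketch: each stratum's share of the sample equals its share of the population, rounded to the nearest whole article. This is an illustrative reconstruction, not the authors' actual procedure; note that with these particular numbers the rounded allocations sum exactly to 120, whereas in general a largest-remainder adjustment may be needed.

```python
import random

strata = [32, 35, 14, 46, 81]   # articles per journal (N_i), N = 208
n = 120                          # total sample size

N = sum(strata)
allocation = [round(n * Ni / N) for Ni in strata]
print(allocation, sum(allocation))  # [18, 20, 8, 27, 47] 120

# Draw the within-stratum simple random samples (article indices per journal).
random.seed(1)  # fixed seed only to make this illustration reproducible
samples = [random.sample(range(Ni), ni) for Ni, ni in zip(strata, allocation)]
```

Each stratum is then sampled independently, so every journal is represented at roughly the same 58% rate reported in Table 3.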

Stage 3: Four independent reviewers reviewed the 120 articles in the random sample according to the following scheme: each reviewer read seven articles per week (totalling 120 after 18 weeks), with consensus reached every week. The group then presented findings (specifically disagreements) weekly to the fifth reviewer, as arbiter, for finality and agreement. In all cases, clear consensus was reached by the panel of five. The selected articles were reviewed by the panel according to the amended framework as research framework. The research framework is presented in Tables 4a and 4b. The outcomes of the agreed-upon reviews were captured onto the data sheet summarised in Annexure A.


Neuendorf (2002) listed the following as unacceptable issues in research publications: inadequate methods or explanation of methods; limited or misused data; inadequate theory; inappropriate journal; presentation and style; unacknowledged bias (ideological, author dominance, selective data to fit a favoured theory); inadequate knowledge; limited analysis; inadequate discussion; and dubious ethics. Following on the definition and process of research (Leedy & Ormrod, 2005), the categories of research (Rosenthal & Rosnow, 1991), the research design (Babbie & Mouton, 2001) and the issue of research paradigm and research methods (Shah & Corley, 2006; Babbie & Mouton, 2001; Bryman & Bell, 2007; Neuman, 2006), the review framework of Scandura and Williams (2007) was expanded to make provision for an unambiguous research framework that can incorporate all aspects of the research as identified by the reviewers in the pilot study, and not only the research methods alone as per Scandura and Williams (2007). Some of the comments provided by the four reviewers on the nature of the analytical techniques were: 'The authors made use of a "hedonic pricing model", a technique used exclusively in Econometrics. The authors made use of "seasonal unit root tests", "cointegration analysis" and "Granger causality", again techniques used in the subject of Econometrics. Therefore I thought we need to make provision for these "domain-specific" techniques. Descriptive statistics are also used in the article, but only to show averages, not for the purpose of coming to the main conclusion. This has also occurred in a lot of the other articles. Shouldn't we distinguish between "purposeful" techniques and "side-issue/additional/supportive" techniques?'

All selected articles were reviewed by the panel according to the following research framework criteria:

Table 4a: Research framework for review of management research articles
ELEMENTS IN MANAGEMENT RESEARCH    Mark the most appropriate with an X

1. Ontology
a.    Positivism
b.    Phenomenology
c.    Both
2. Research Strategy
a.    Sample survey/interview
b.    Laboratory experiment
c.    Secondary data/literature review/archive
d.    Observation study
e.    Action research
f.    Grounded theory
g.    Experimental simulation
h.    Field experiment/field study
i.    Computer simulation
3.  Research level

a.    Individual
b.    Group/team
c.    Organisation
d.    Industry
e.    Country
f.    Geographic region
4.  Research domain
AE =     Agricultural Economics
AU =     Auditing
BE =     Business Education
E =     Entrepreneurship/Innovation
EC =     Economics
EM =     Environmental Management
FM =     Financial Management
GBM = General Business Management
HRM = Human Resource Management
TC =    Telecommunications
LM =     Logistics Management
M =     Marketing
PAM =     Public Administration and Management
QM =     Quality Management
R =    Research
SBM =     Strategic Management
TM=    Tourism Management (including ecotourism)
TECH=    Technology (web-based; mobile; browser)
5.  Analytical technique (see Table 4b)
6.  Purpose of research
a.    Exploratory research
b.    Descriptive research
c.    Causal research
7.  Time frame
a.    Cross-sectional
b.    Longitudinal
c.    Triangulation

Table 4b: Research framework for the review of analytical techniques

1. Comparison of means    Probability    E.g. ANOVA, ANCOVA, MANOVA, MANCOVA, t-tests.
2. Regression analysis    Probability    Any type of linear or logistic regression.
3. Correlation analysis    Probability    Any type of correlation.
4. Nonparametric analysis    Probability/Non-probability    Any type of nonparametric statistical technique. Used for small samples where no assumptions can be made about the underlying distribution of the data. E.g. Sign test, Wilcoxon signed rank test, Wilcoxon rank sum test, Spearman's rank correlation coefficient, Kolmogorov-Smirnov test, Mann-Whitney test, Kruskal-Wallis test, Friedman test.
5. Descriptive statistics    Probability/Non-probability    Measures of location: arithmetic mean, geometric mean, median, mode, quantiles. Measures of spread: range, interquartile range, quartile deviation, standard deviation, variance. Measures of symmetry and kurtosis: Pearson coefficient of skewness, Bowley's coefficient of skewness, coefficient of kurtosis. Exploratory data analysis: stem-and-leaf display, box-and-whisker plot.
6. Analysis of contingency tables    Probability    E.g. Linear models, log-linear analysis, Chi-square independence test, Chi-square goodness-of-fit test (large samples), McNemar's test.
7. Factor-analytic and clustering techniques    Probability    E.g. Exploratory Factor Analysis (EFA), Principal Component Analysis (PCA), cluster analysis, multidimensional scaling, discriminant analysis, correspondence analysis.
8. Structural equation modelling (SEM) and path-analytic techniques    Probability    LISREL (Linear Structural Relationships), path analysis. Software package: AMOS.
9. Time-dependent techniques    Probability    Techniques for analysing data at different points in time or over a specified time period. E.g. Time-series analysis, survival analysis, change-point analysis, sensitivity analysis.
10. Computer simulation    Probability    E.g. Monte Carlo simulation, linear programming.
11. Quality control techniques    Probability    E.g. Total Quality Management (TQM), process capability analysis. Control charts: run charts, Pareto charts, attributes control charts, CUSUM charts.
12. Validity and reliability tests    Probability/Non-probability    Types of validity: internal, external, ecological, population, construct, intentional, representation, content, face, observation, criterion (concurrent, predictive), convergent, discriminant, social, nomological, statistical conclusion validity. Types of reliability: inter-rater/inter-observer, test-retest, parallel-forms, internal consistency. Internal consistency measures: Cronbach's alpha, average inter-item correlation, average item-total correlation, split-half reliability.
13. Meta-analysis    Probability    Meta-analysis is a statistical technique for summarising and reviewing previous quantitative research. Selected parts of the reported results of primary studies are entered into a database, and this "meta-data" is "meta-analysed" in similar ways to other data, using descriptive and then inferential statistics to test certain hypotheses. The robustness of the main findings can be explored using sensitivity analysis. E.g. Mantel-Haenszel analysis, Peto analysis.
14. Subject-related techniques    Probability/Non-probability    Techniques used exclusively in specific subjects. E.g. Boisot's I-space model in Knowledge Management, the Logistics Cost Model (LCM) in Logistics Management, Johansen multivariate cointegration in Econometrics.
15. Qualitative techniques    Probability/Non-probability    Techniques used in qualitative research. E.g. Content analysis, matrix analysis, conceptual frameworks.
16. Other inferential statistics    Probability    Techniques that do not fall under numbers 1 to 15 above. E.g. Confidence intervals, estimation methods.
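The meta-analysis entry in Table 4b (row 13) can be illustrated with a minimal fixed-effect, inverse-variance pooling sketch; the study effects and variances below are invented purely for illustration and are not drawn from any study in the sample:

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance (fixed-effect) pooling of primary-study estimates."""
    weights = [1.0 / v for v in variances]  # more precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))      # standard error of the pooled estimate
    return pooled, se

# Two invented study effects with their variances -- illustration only.
pooled, se = fixed_effect_pool([0.2, 0.5], [0.04, 0.01])
```

The pooled estimate is pulled towards the more precise second study; a sensitivity analysis, as the table notes, would repeat the pooling while omitting one study at a time.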


The design of this research allowed for reliability of the results. The four reviewers separately read each article in the sample, completed the revised framework (see Tables 4a and 4b) and then took the necessary steps to reach a final resolution. The coders re-examined any coding disagreement and arrived at a final consensus coding (100% agreement), which provided the data for this analysis. They met weekly with the arbitrator (the fifth reviewer) to ensure overall consensus; agreement was recorded only once all reviewers concurred.

The categories and items in our research framework were drawn from the previous literature in management research and refine the Scandura and Williams framework. We did experience some problems with the large number of domain categories combined with the small sample drawn, which resulted in small cell frequencies. The problem with too few coding categories is that they potentially increase the likelihood of random agreement in coding decisions and thus lead to an overestimation of reliability (Milne and Adler, 1999). Conversely, a higher number of items in the instrument increases its complexity (Beattie and Thomson, 2007) and may increase coding variance.
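The concern raised by Milne and Adler (1999) — that few coding categories inflate chance agreement — is what a chance-corrected index such as Cohen's kappa guards against. A minimal sketch (the coder labels below are hypothetical, not taken from our data sheets):

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders (Cohen's kappa)."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected (chance) agreement rises as the number of categories shrinks,
    # which is why raw percent agreement overstates reliability.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codes from two reviewers classifying four articles:
kappa = cohen_kappa(['QL', 'QL', 'QN', 'QN'], ['QL', 'QN', 'QN', 'QN'])
```

Here raw agreement is 75%, but half of that is expected by chance with only two categories, so kappa is 0.5 — a far more conservative reading of the same codes.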


Table 5 summarises the results of the initial surveys sent to the editors of the journals to introduce the project. Where the editors did not provide information, the details were extracted from the journals' websites or from the hard copies of the journals provided.

Table 5:  Initiation phase: Results of the surveys conducted with the editors
Name of Journal    Number of copies/year    Number of copies published    Number of submissions/issue    Acceptance rate    Cost/issue (ZAR)    Price/issue (ZAR)
Journal 1    4    20    4    0.275    15000    100
Journal 2    4    800    6    .    14000    64
Journal 3    .    .    13    0.6    0    0
Journal 4    3    23    8    .    0    0
Journal 5    4    500    11    0.6    20000    60
(. = not supplied by the editor)

Tables 6 and 7 summarise the empirical findings in the sample encountered by the reviewers.

Table 6: General comments made by reviewers

Purpose not stated    26
No indication of primary research strategy (quantitative/qualitative)    37
No indication of sampling design    11
Contradictions with regard to non-probability or probability sampling techniques    13
Inappropriate use of inferential techniques (for instance factor analysis)
Invalid analytical technique used for the chosen sampling method    13
Incorrect research level (for instance organisational indicated, but an industry or country setting used)    Almost none of the articles stated the level used; the reviewers recorded the level they judged correct.
Unclear domain    13

The reviewers further observed that:
•    The domain was specified in only some cases; the reviewers had to infer it for the remainder.
•    In 20 out of a possible 68 cases the authors stated that the study was qualitative in nature; the reviewers had to infer from the information provided that the other 48 cases were qualitative.
•    In 15 out of a possible 71 articles the authors stated that the study was quantitative in nature; the reviewers concluded that a further 56 articles could have been quantitative, given the information provided.
•    In only one of the 120 cases did the reviewers have to interpret the strategy; the other 119 articles stated the strategy (e.g. survey) clearly.
•    All authors clearly stated the analytical techniques used to analyse the data. These techniques were in line with the purpose of the study, but in a few cases they were not appropriate to the sampling technique. This applied especially to research with a causal purpose that used a non-probability sampling technique.
•    None of the authors specified the level of the inquiry (individual, organisation, industry or country); the reviewers had to infer this from the information provided.
•    The purpose was often not explicitly stated as exploratory, descriptive, causal or a combination.
•    Only 34 authors stated the time frame of the study as either cross-sectional or longitudinal; in the other 86 cases the reviewers had to infer it from the information provided in the article.
Furthermore, we found that sampling was the dimension that posed the greatest challenge, especially where authors stated that the purpose of the study was causal. In 27 of the 44 articles with a causal purpose, the authors used a non-probability sample instead of a probability sample. The analytical techniques required to determine causality (e.g. comparison of means, regression analysis, correlation analysis, analysis of contingency tables, factor-analytic and clustering techniques, structural equation modelling and path-analytic techniques, and time-dependent analysis) rest on specific statistical assumptions that were neither stated nor adhered to (see Table 7).

Table 7: Dimensions of research encountered by the reviewers

Unsure    48/68    56/71    1    0    120    60?unsure    86    28 (44)
1    18/18    10/11    9/11    1/18    0    18/18    13/18    15/18    7/18;7/9
2    13/21    8/15    1/8    0    0    20/21    0    7/21    6/21; 6/6
3    8/8    7/7    1/1    0    0    8    1    5/8    1/1
4    UNSURE 11/36    9/14    18/21    0    0    ?3    ?14/21    18/21    4/36; 4/8 causal
5    Unsure 13    18/21    27/30    0    0    ?    32    41/46    10; 10/20

We subsequently looked at the research dimensions favoured by the journals. Table 8 contains a host of information that needs further examination; here only the dimensions critical to the validity and reliability of the research (i.e. purpose, nature, strategy, sample) are addressed. Firstly, the purpose of any published research needs to be noted, as the purpose is the pivot of any article. Journals 1 and 5 favoured articles with a causal purpose, journal 2 favoured articles with an exploratory purpose, journal 3 a descriptive purpose, and journal 4 published equal numbers of articles with exploratory, descriptive and causal purposes. The research design follows the research purpose. According to the literature (see, for example, Creswell (2003) and Neuman (2006)), research with an exploratory and/or descriptive purpose is typically qualitative in nature, while research with a causal purpose is quantitative in nature. Journals 2 and 3 favoured research that is qualitative in nature, journals 4 and 5 favoured research that is quantitative in nature, and journal 1 published equal numbers of quantitative and qualitative studies. Given that journals 1 and 5 favoured causal research, one would expect a general tendency towards probability sampling, which is not the case.

Table 8: Summary of research dimensions favoured by the DoE accredited management journals

Dimension    Journal 1    Journal 2    Journal 3    Journal 4    Journal 5
Domain    HRM/IOP/OD; Marketing    Spread    HRM/IOP/OD    HRM/IOP/OD    Eco
Approach    Equal qn/ql    qn    ql    qn    ql
Strategy    Survey or survey combined with other    Survey or survey combined with other    Conceptual    Survey or survey combined with other    Survey or survey combined with other
Analytical technique    Statistical inference?    Descriptive stats    Descriptive stats; conceptual    Statistical inference    Statistical inference
Purpose    Causal    Exploratory    Descriptive    Combination    Causal
Time frame    CS    CS    CS    CS    CS
Sample    Non-probability    Non-probability    Not applicable    Non-probability    Non-probability
Qn = quantitative; ql = qualitative; CS = cross-sectional

The recorded findings were captured on data sheets (Annexure A) and analysed using SPSS (Statistical Package for the Social Sciences), release 16.0.1. Since the data were obtained through a stratified random sampling technique, inferential statistics could be applied.
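The proportional stratified draw of 120 articles from the 208 published can be sketched with a largest-remainder allocation. The per-journal stratum sizes below are hypothetical, since the article does not report them in full:

```python
def proportional_allocation(strata_sizes, sample_size):
    """Largest-remainder proportional allocation of a stratified sample."""
    population = sum(strata_sizes.values())
    quotas = {k: sample_size * v / population for k, v in strata_sizes.items()}
    alloc = {k: int(q) for k, q in quotas.items()}       # floor of each quota
    shortfall = sample_size - sum(alloc.values())
    # Award the leftover units to the strata with the largest fractional parts.
    for k in sorted(quotas, key=lambda s: quotas[s] - alloc[s],
                    reverse=True)[:shortfall]:
        alloc[k] += 1
    return alloc

# Hypothetical per-journal article counts summing to the 208 published:
alloc = proportional_allocation(
    {'Journal 1': 50, 'Journal 2': 40, 'Journal 3': 14,
     'Journal 4': 60, 'Journal 5': 44}, 120)
```

Largest-remainder rounding guarantees the stratum samples sum exactly to 120, which naive rounding of each quota does not.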

Firstly, the Chi-square test for homogeneity was used to test whether the five journals differed with regard to each of the research variables (domain, qualitative/quantitative, strategy, analytical technique, level, purpose, timeframe and sample) at the 5% level of significance. A separate test was conducted for each variable. For all variables except the timeframe of the study, some categories had to be merged to increase the expected cell frequencies (see Annexure B). As a general rule, all expected frequencies should be at least five; however, only eight articles were sampled from the Journal of Contemporary Management, causing very low observed (and expected) cell frequencies in the third column of the cross-tabulation tables. Therefore, to be more lenient, expected frequencies greater than 2 were accepted. Expected cell frequencies below 2 are highlighted (see Annexure B); in most cases these constitute only a very small percentage of cells. Table 9 summarises the results obtained for each of the eight research variables.

Table 9:  Summary of the results from the Chi-square test for homogeneity
Variable    Chi-square statistic    p-value    Conclusion
Domain    49.4519    2.802 × 10⁻⁵    The five journals differ with regard to the research domain.
Qualitative/Quantitative    12.4586    0.1319    The five journals do not differ with regard to whether the study is qualitative or quantitative (QL/QN).
Strategy    17.9343    0.1177    The five journals do not differ with regard to the research strategy.
Analytical technique    26.3317    0.7488    The five journals do not differ with regard to the analytical technique.
Level    25.2822    0.0135    The five journals differ with regard to the research level.
Purpose    11.3151    0.5021    The five journals do not differ with regard to the purpose of the research.
Timeframe    0.0131    1.0000    The five journals do not differ with regard to the timeframe of the study.
Sample    13.5229    0.3322    The five journals do not differ with regard to the sampling method used.

As can be seen from Table 9, the five journals showed homogeneity in terms of the qualitative/quantitative nature of the study, the research strategy, the analytical technique, the purpose of the study, the timeframe of the study and the sampling method used. However, homogeneity between the five journals could not be confirmed in terms of the research domain and the research level.
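The homogeneity test underlying Table 9, together with the expected-frequency check that motivated the merging of categories, can be sketched in a few lines; the observed counts below are invented for illustration, not taken from the study data:

```python
def chi_square_homogeneity(observed):
    """Pearson chi-square statistic for an r x c journal-by-category table,
    with expected frequencies and a count of cells below the leniency
    threshold of 2 used in this study."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    expected = [[r * c / grand for c in col_totals] for r in row_totals]
    stat = sum((o - e) ** 2 / e
               for obs_row, exp_row in zip(observed, expected)
               for o, e in zip(obs_row, exp_row))
    low_cells = sum(e < 2 for exp_row in expected for e in exp_row)
    df = (len(observed) - 1) * (len(observed[0]) - 1)
    return stat, df, low_cells

# Invented 2x2 illustration (two journals x two categories), not study data:
stat, df, low = chi_square_homogeneity([[10, 20], [20, 10]])
```

When `low_cells` is large, as happened here for the journal with only eight sampled articles, categories must be merged before the chi-square approximation can be trusted.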

Secondly, multinomial logistic regression was used to determine whether one or more of the research variables are influenced by the other variables, that is, to identify potential response variables and their associated predictor variables and then analyse the relationships between them. As an initial intuitive guess, it was thought that the sample type could be influenced by the purpose of the study, the research strategy and whether the study is qualitative or quantitative in nature (QL/QN). The sample type was therefore identified as the response variable, with the purpose, strategy and QL/QN as predictor variables. The non-probability sample type was chosen as the reference category, since it occurred most often.

The likelihood ratio test (Chi-square statistic = 51.085, p-value = 0.001) indicated that the model is significant at the 5% level of significance, leading to rejection of the null hypothesis that all population regression coefficients, except the constant, are zero. Thus, at least one population regression coefficient is non-zero. The deviance goodness-of-fit test (Chi-square statistic = 95.953, p-value = 0.482) showed that the overall model fit is adequate. According to the likelihood ratio tests for the individual model parameters, the research strategy (Chi-square statistic = 26.765, p-value = 0.002), as an independent variable, contributed significantly to the model at the 5% level of significance. QL/QN (Chi-square statistic = 3.795, p-value = 0.704) did not make a significant contribution, while the purpose of the study (Chi-square statistic = 15.958, p-value = 0.068) was significant only at the 10% level.
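A likelihood ratio statistic of this kind is simply twice the log-likelihood gain of the fitted model over the null model, referred to a chi-square distribution. The sketch below uses the Wilson-Hilferty approximation to avoid a statistics library; the log-likelihoods and degrees of freedom are assumptions chosen to reproduce the reported statistics, since the article reports neither:

```python
import math

def chi2_sf(x, df):
    """Approximate chi-square survival function (Wilson-Hilferty)."""
    z = ((x / df) ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * df))) \
        / math.sqrt(2.0 / (9.0 * df))
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # upper-tail normal probability

def likelihood_ratio_test(ll_null, ll_full, df):
    """G = 2 * (log-likelihood gain); df = extra parameters in the full model."""
    g = 2.0 * (ll_full - ll_null)
    return g, chi2_sf(g, df)

# Hypothetical log-likelihoods chosen to reproduce the reported G = 51.085;
# df = 24 is likewise an assumption.
g, p_model = likelihood_ratio_test(-120.0, -94.4575, 24)

# The strategy term's reported chi-square of 26.765 yields p ~ 0.002 under the
# assumption of df = 9 (three response contrasts x three strategy dummies).
p_strategy = chi2_sf(26.765, 9)
```

The same survival function also reproduces the reported significance pattern: the strategy term falls below 0.005, while a QL/QN-sized statistic would not approach significance.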

Only three of the estimated regression coefficients were found to be significant at the 5% level of significance:  For the ‘population or N/A’ sample type, the ‘survey/interview’ (Wald statistic = 9.759, p-value = 0.002) and ‘mixed’ (Wald statistic = 4.342, p-value = 0.037) categories of the predictor variable ‘strategy’ and, for the ‘mixed/unsure’ sample type, the ‘survey/interview’ (Wald statistic = 3.928, p-value = 0.047) category of the predictor variable ‘strategy’.

Table 10: Parameter estimates as obtained from multinomial logistic regression

Sample type (reference: non-probability)    Predictor    Category    B    Exp(B)    p-value
Probability    Strategy    Survey/interview    -0.180    0.835    0.903
        Grounded theory    -0.336    0.715    0.861
        Mixed    0.403    1.496    0.791
    QL/QN    Qualitative    -0.191    0.826    0.803
        Quantitative    0.166    1.180    0.813
    Purpose    Descriptive    0.175    1.191    0.898
        Exploratory    0.795    2.213    0.506
        Causal    1.905    6.717    0.113
Population or N/A    Strategy    Survey/interview    -3.644    0.026    0.002
        Grounded theory    -1.191    0.304    0.382
        Mixed    -2.463    0.085    0.037
    QL/QN    Qualitative    0.350    1.419    0.620
        Quantitative    0.039    1.040    0.959
    Purpose    Descriptive    -0.243    0.784    0.771
        Exploratory    -0.701    0.496    0.371
        Causal    -1.324    0.266    0.152
Mixed/Unsure    Strategy    Survey/interview    -2.412    0.090    0.047
        Grounded theory    -1.855    0.156    0.274
        Mixed    -1.661    0.190    0.185
    QL/QN    Qualitative    -0.319    0.727    0.697
        Quantitative    0.643    1.901    0.404
    Purpose    Descriptive    -0.604    0.546    0.532
        Exploratory    -0.520    0.595    0.542
        Causal    -0.977    0.376    0.305

From Table 10 it can b

This entry was posted in Vol.2, No1/2010.