Authors must work hard to make their results easy to follow. Thus, tables must be comprehensible by themselves. That is, a reader should be able to understand a table without reference to the text (and preferably without reference to other tables). The converse rule is that the text should stand by itself, even if a reader ignores the tables.
Variable names should be self-explanatory. The text should flow without the reader ever noticing that a variable name has been introduced. For example, "Each plant visit by a social scientist corresponded to 3.1% higher voluntary turnover per month (SE = .9%, P < .05)" is much better than "The coefficient of SCISTUD on VOLTUR was .031 (.009)," which is better than "S and V were significantly related." In short, nobody should know what passed between you and your computer in the middle of the night. Name your dummy variable Male or Female, but not Sex.
Often variables are proxies for conceptual variables. If you want to measure complexity, but your variable is log(employment), name the variable employment (if all variables are expressed in logs) or log(employment), not complexity.
Each table has a number and a meaningful title: "Wages Do Not Rise with Tenure," not "Wages."
All statistical tests include the name of the test, the test statistic, and the significance level. Include a brief description of complex procedures at the bottom of the table. Remind readers of the meaning of complicated variables.
Each estimated equation lists the dependent variable and sample size. If one of these is constant for an entire table, list it only once. Always include measures of goodness of fit such as R² or log-likelihood. Usually include a test of the significance of the entire regression (such as an F test for ordinary least squares), and appropriate residual diagnostics (such as heteroskedasticity tests).
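For concreteness, here is a minimal sketch of how these table ingredients might be pulled from a fitted regression. The variable names are hypothetical and statsmodels is assumed to be available; this is an illustration, not a prescribed procedure.

```python
# Sketch: extract sample size, R-squared, the overall F test, and a
# heteroskedasticity diagnostic for a regression table (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "training": rng.normal(size=n),
    "tenure": rng.normal(size=n),
})
df["turnover"] = 2.0 - 0.4 * df["training"] + 0.1 * df["tenure"] + rng.normal(size=n)

model = smf.ols("turnover ~ training + tenure", data=df).fit()

print("N =", int(model.nobs))                    # sample size
print("R-squared =", round(model.rsquared, 3))   # goodness of fit
print("F =", round(model.fvalue, 2), "p =", round(model.f_pvalue, 4))  # whole-regression test

# Breusch-Pagan test for heteroskedasticity in the residuals
lm_stat, lm_p, _, _ = het_breuschpagan(model.resid, model.model.exog)
print("Breusch-Pagan LM =", round(lm_stat, 2), "p =", round(lm_p, 4))
```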
Coefficients should include standard errors or t statistics, and some indication of statistical significance (usually P values or * for P < .05, ** for P < .01). A note should explain whether standard errors or t statistics are reported. Report P < .10 only if the sample size is small. For very large datasets it makes sense to demand a higher degree of certainty, so report only P < .01.
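As one illustration (not from the original text), a short script can attach the parenthesized standard errors and significance stars to each coefficient; the variable names below are made up and statsmodels is assumed.

```python
# Sketch: format coefficients as "estimate* (standard error)" rows.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"x1": rng.normal(size=300), "x2": rng.normal(size=300)})
df["y"] = 1.0 + 0.5 * df["x1"] + rng.normal(size=300)

fit = smf.ols("y ~ x1 + x2", data=df).fit()

def stars(p):
    # ** for p < .01, * for p < .05; state the convention in a table note
    return "**" if p < 0.01 else "*" if p < 0.05 else ""

for name in fit.params.index:
    b, se, p = fit.params[name], fit.bse[name], fit.pvalues[name]
    print(f"{name:10s} {b:6.3f}{stars(p)} ({se:.3f})")
```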
Include a meaningful number of digits, usually two or three significant figures. Neither .34849560 (3.132534) nor 0.00 (SE = 0.00, P < .001) is useful. Sometimes multiply or divide a variable by 10 so the coefficient is in a meaningful range.
Always in the text, and often in the tables, translate the coefficients of interest into words. For example, convert logit coefficients to measure how a one-unit change in the variable affects the probability of the occurrence of the event (dP/dX). Translate hazard rate coefficients to show how a one-unit change in the variable affects the expected time until some event occurs: "If the union local is in a city, its expected lifetime is four months (12 percent) shorter than if it were rural."
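A minimal sketch of the logit translation, with invented variables and statsmodels assumed: report the average marginal effects (dP/dX), not the raw coefficients.

```python
# Sketch: convert logit coefficients into average marginal effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({"city": rng.integers(0, 2, size=n), "size": rng.normal(size=n)})
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * df["city"] + 0.3 * df["size"])))
df["failed"] = rng.binomial(1, p)

logit = smf.logit("failed ~ city + size", data=df).fit(disp=False)

# Average marginal effects: how a one-unit change shifts the probability of the event
print(logit.get_margeff(at="overall").summary())
```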
Also, always explain whether the effect is large or not, perhaps by comparing its size to something readers understand. For example, the importance of marital status on wages can be expressed as how many years of education are required to raise wages by as much. Sometimes standardized coefficients (expressed in standard deviation units) are useful: "A one month increase in training (approximately one standard deviation) corresponds to 4 percent lower turnover (approximately one half of a standard deviation)."
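One way (among several) to obtain standardized coefficients is simply to z-score the variables before estimating; a sketch with hypothetical data, statsmodels assumed:

```python
# Sketch: a slope in standard-deviation units via z-scored variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({"training_months": rng.normal(6, 1, size=400)})
df["turnover"] = 20 - 4 * df["training_months"] + rng.normal(0, 4, size=400)

z = (df - df.mean()) / df.std()   # standardize outcome and regressor
beta = smf.ols("turnover ~ training_months", data=z).fit().params["training_months"]
print(f"One SD more training corresponds to a {beta:.2f} SD change in turnover")
```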
Effect sizes from many nonlinear models, such as LISREL, factor analysis, path analysis, and ordered logits, are hard to understand. Thus, explain the size as well as the statistical significance of the important effects.
Statistical significance: Most authors focus on whether coefficients are statistically significantly different from zero at the 5 percent level. Thus, if b1 is statistically significantly different from zero at the 5 percent level, but b2 is not, some authors write as if b1 is different from and more important than b2. At a minimum, authors should focus on confidence intervals, and explicitly test whether b1 and b2 differ by a statistically significant amount.
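A sketch of the direct test (hypothetical variable names, statsmodels assumed): rather than comparing stars, report the confidence intervals and test b1 - b2 = 0 explicitly.

```python
# Sketch: confidence intervals plus a test that two coefficients differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
df["y"] = 0.30 * df["x1"] + 0.20 * df["x2"] + rng.normal(size=n)

fit = smf.ols("y ~ x1 + x2", data=df).fit()
print(fit.conf_int())             # confidence interval for each coefficient
print(fit.t_test("x1 - x2 = 0"))  # direct test that b1 and b2 differ
```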
Summary statistics should be included for the referee, although they do not always need to be in the paper. Usually include means, standard deviations, units, sources, geographical region covered, and sample size. Usually include summary statistics for control variables as well as for the main variables of interest. Include the statistics for the un-logged version of any important variables you use in log form. If your paper computes important and meaningful intermediate results (residuals from a first-stage equation, for example), include their summary statistics.
Often give the referee a table of correlations of either the main variables or of all the variables.
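With the data in a pandas DataFrame, the summary-statistics and correlation tables take only a few lines; the variables below are hypothetical.

```python
# Sketch: summary statistics (with the un-logged version kept) and correlations.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "wage": rng.lognormal(3, 0.5, size=300),   # dollars per hour
    "tenure": rng.exponential(5, size=300),    # years
    "union": rng.integers(0, 2, size=300),     # 1 = union member
})
df["log_wage"] = np.log(df["wage"])            # keep the un-logged version too

print(df.describe().T[["count", "mean", "std", "min", "max"]])
print(df.corr().round(2))
```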
Use simple statistical procedures to complement fancier analyses. Many papers have a simple comparison of means as their main result (to be formalized and measured more precisely in later sections). If your paper has such a comparison at its core, then present the unadjusted means (e.g., men vs. women).
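As an illustration (hypothetical data, scipy assumed), the unadjusted comparison can be as simple as two group means and a t test:

```python
# Sketch: unadjusted group means with a two-sample t test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
wages_men = rng.normal(20.0, 5.0, size=200)
wages_women = rng.normal(18.5, 5.0, size=200)

print("mean (men)   =", round(wages_men.mean(), 2))
print("mean (women) =", round(wages_women.mean(), 2))
t, p = stats.ttest_ind(wages_men, wages_women)
print(f"difference in means: t = {t:.2f}, p = {p:.3f}")
```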
Figures and graphs, like tables, are supposed to be comprehensible without reference to the text, and the text without reference to the figure. Place labels (in words, not symbols or variable names) and units on axes.
Equations worthy of being in your paper can be expressed as a sentence. Do so. Many economic theories generate an interesting first-order condition. Restate it in words: "At the optimum, a dollar spent today must bring the same utility as a dollar spent tomorrow, suitably discounted."
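For instance, the sentence above is a restatement in words of the standard consumption Euler equation (a textbook example, not taken from the original):

```latex
% Marginal utility of a dollar spent today equals the suitably discounted
% marginal utility of a dollar spent tomorrow.
u'(c_t) = \beta (1 + r)\, u'(c_{t+1})
```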
Abstracts present the question the paper will answer. Since social scientists are not mystery writers, abstracts also summarize the results. Inability to write an abstract of fewer than 100 words often indicates deeper problems with the paper.
No research is perfect. This section presents a checklist of common imperfections in empirical research. Discuss the limitations of your paper and plausible alternate explanations for the results.
Surveys face problems with the match between the actual sample and the perfect sample. Always mention the response rates and possible response biases. Are responses likely to be sensitive to changes in the wording of the questions? Do respondents have incentives to hide their attitudes? Are attitudes likely to match behaviors?
A naming issue appears when data undergo factor analysis or are summed into an index. Do not merely name the factors or indices, but present a sample of the questions along with the factor weights. Differentiate exploratory from confirmatory factor analysis.
Econometric and statistical analyses are subject to all the difficulties of surveys. They also fall prey to the "textbook chapters": simultaneity, sample selection, measurement error, outliers, and so forth. Almost all analyses should use methods that adjust for clustering of the sample (as in most common datasets) and that are robust to the presence of heteroskedasticity. Re-analyze the data several times to check the specification, and make sure that your results are not sensitive to changes in the estimating technique, functional forms, control variables, and so forth. If all is well, allude to these results briefly, but do not include them in the paper. Often include copies of these tables for the referee.
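A minimal sketch (invented data and cluster variable, statsmodels assumed) of the heteroskedasticity-robust and cluster-robust adjustments mentioned above:

```python
# Sketch: refit the same model with robust and cluster-robust standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 600
df = pd.DataFrame({
    "plant": rng.integers(0, 30, size=n),   # cluster identifier
    "training": rng.normal(size=n),
})
df["turnover"] = 2 - 0.3 * df["training"] + rng.normal(size=n)

base = smf.ols("turnover ~ training", data=df)
hc = base.fit(cov_type="HC1")                                        # heteroskedasticity-robust
cl = base.fit(cov_type="cluster", cov_kwds={"groups": df["plant"]})  # clustered by plant
print("robust SE:", round(hc.bse["training"], 4), "clustered SE:", round(cl.bse["training"], 4))
```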
Experiments almost always have questionable external validity. Is there any reason to believe that people in actual organizations, embedded in complex social networks and making decisions that affect their careers over many years, act the way college sophomores do in your one-hour treatment?
Is the name you have given your treatment appropriate? If you tell one group of people they are not worth the wage they receive, this treatment may affect self-esteem as well as perceived equity. Thus, naming the treated group "perceived inequity" can mislead readers.
Case studies have limited generalizability. Explain in what sense this case study generalizes, and outline the limits of its generalizability. What are the likely biases due to subjective perception and reporting?
Causality is almost never clear in the social sciences. Common problems include reverse causality, selection effects, and omitted variable bias. Discuss alternative causal explanations for the results. These alternative channels often suggest additional tests you might perform.
When using instrumental variables or multi-equation methods such as two-stage least squares and maximum likelihood, always list your instruments or other identifying assumptions. Include these lists in both the text and the tables. Justify why it is plausible that your instruments are exogenous. Both identification by functional form (as in many sample selection models) and instrumenting with lagged variables require strong justification.
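A sketch of a two-stage least squares estimate with the instrument listed explicitly; the variables are simulated and the third-party linearmodels package is assumed to be available.

```python
# Sketch: 2SLS with a single, explicitly named instrument.
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(8)
n = 1000
z = rng.normal(size=n)                       # instrument: its exogeneity must be argued
u = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor
y = 0.4 * x + u + rng.normal(size=n)

df = pd.DataFrame({"y": y, "x": x, "z": z})
df["const"] = 1.0

iv = IV2SLS(dependent=df["y"], exog=df[["const"]], endog=df[["x"]],
            instruments=df[["z"]]).fit(cov_type="robust")
print(iv.summary)
```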
Economic theory: Explain the robustness of the results in "assumption space." Differentiate assumptions that are crucial from those that merely simplify. Even if you do not prove it, explain what happens if preferences are not Cobb-Douglas, or if agents are not identical. Usually include detailed derivations in an appendix for the referee.
Go for clear exposition of substance rather than trying to impress readers with big words, hard statistics, or fancy theory. In the final section, present the main conclusions so that they are accessible to the nonspecialist reader.
Use footnotes sparingly, and Latin almost never. Don't rely on variable names if you can spell them out. Remind readers often of the meaning of your symbols: often replace "q" with "the elasticity of substitution." Readers otherwise have to expand the symbol into the full phrase each time; help them out.
No article has ever suffered from too much helpful advice from colleagues. Be sure many readers look at your article. Use family, friends, colleagues from other departments: most of your paper should be understandable even to the nonspecialist.