New paper · April 2026

From Dissertation to Publication: What Peer Review Actually Does to a Paper

Most published papers read as if they emerged fully formed: a clear research question, a well-motivated design, and clean results that align neatly with theory. Ours didn’t. This is the story of how a dissertation chapter became a journal article—and what happened to it along the way.

Our paper—Differential Pathways of Parenting Support: Exploring Head Start’s Stronger Effects on the Early Literacy Skills of Dual Language Learners—was just published in Early Childhood Research Quarterly, co-authored with Soojin Oh Park at the University of Washington. You can read the full article for free until May 31, 2026.

What follows is two things at once: a summary of what we found, and a behind-the-curtain look at how we got there. I’m writing the second part especially for early-career researchers, because nobody told me what the process actually looks like when I was starting out.

The question

Head Start is the largest federally funded preschool program in the United States, serving nearly a million children from low-income families every year. Previous research has shown that Head Start improves children’s early literacy skills. But Soojin wanted to understand how—through what mechanism?

The question she asked was precise and original: does Head Start improve literacy by changing what parents do at home? And does this pathway work differently for children who are growing up learning two languages—dual language learners (DLLs)? This framing—parenting as a mediator, with DLL status as a moderator—was entirely Soojin’s. It came from her deep knowledge of bilingual family literacy practices and her conviction that DLL children’s developmental pathways are qualitatively different, not just quantitatively weaker.

Why it matters

About 30 percent of children in Head Start are dual language learners. These children are often treated as a monolithic group in research, but their experience of early education is fundamentally different. They are navigating two linguistic systems at once, and their families often engage in literacy practices that look different from what standardized assessments capture.

If Head Start works through parents—by encouraging them to read, tell stories, and engage in literacy activities with their children—then understanding whether this mechanism operates differently for DLL families isn’t just an academic question. It has direct implications for how programs should be designed and how resources should be allocated.

What we did

We used data from the Head Start Impact Study (HSIS), a nationally representative randomized experiment with 4,440 children. The experimental design is important: because families were randomly assigned to Head Start or a control group, we can make stronger causal claims than most observational studies allow.

We used mediation analysis to decompose the total effect of Head Start on literacy into a direct effect and an indirect effect operating through parent-child literacy activities. Then we compared these pathways for DLL and non-DLL children.
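In generic notation (a sketch; the symbols below are the standard ones from the causal mediation literature, not lifted from the article), the decomposition is

$$\tau \;=\; \zeta \;+\; \delta, \qquad \text{proportion mediated} \;=\; \frac{\delta}{\tau},$$

where τ is the total effect of Head Start on a literacy outcome, ζ is the direct effect, and δ is the indirect effect running through parent–child literacy activities. The percentage-mediated figures reported below are estimates of δ/τ.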

What we found

Three findings stood out:

1. Head Start increases parent-child literacy activities. Enrollment in Head Start boosted family literacy engagement by 0.18 standard deviations. This means parents whose children attended Head Start read to them more, told them more stories, and engaged in more literacy-related activities at home.

2. These literacy activities mediate Head Start’s effect on skills. About 12 percent of the vocabulary gains and 11 percent of the decoding gains attributable to Head Start operated through this parenting pathway. The indirect effects were statistically significant for both outcomes.

3. The mediation was stronger for dual language learners. This is the headline finding. DLLs showed larger mediated effects than non-DLLs, particularly for decoding skills. In other words, the parenting pathway was a more important mechanism for DLL children than for their monolingual peers.

What this means

The practical implication is that investing in parenting support within Head Start may be especially effective for DLL families. If programs can strengthen the home literacy environment—not by imposing a one-size-fits-all model, but by working with the linguistic and cultural resources families already bring—the returns could be disproportionately large for children navigating two languages.

This is consistent with a broader shift in early childhood research away from deficit-based thinking and toward understanding the assets that multilingual families bring to their children’s development.

Where it started

The intellectual foundation of this paper belongs to Soojin Oh Park. The project began as part of her doctoral dissertation at Harvard, over a decade ago. Even then, the core insight was there: Head Start’s effects on DLL children might work through different mechanisms than for monolingual children. Soojin identified the dataset (HSIS), framed the research questions around parenting as a mediator, and built the theoretical scaffolding connecting bilingual family literacy practices to child outcomes.

What I want early-career researchers to notice is this: the seed of a good paper can exist for years before it finds its final form. Soojin’s dissertation planted that seed. What followed was a long process of figuring out how to grow it into something that could stand on its own in a peer-reviewed journal.

The ambition trap

Here is something nobody tells you when you start a dissertation: the instinct to be ambitious can work against you. Soojin’s original project was genuinely impressive in scope. She was doing causal mediation analysis. She was doing moderated mediation—testing whether the mediation pathways differed by DLL status. She was using multilevel structural equation modeling (MSEM), which is itself an enormously complex topic that entire textbooks are written about. And on top of all that, she was introducing the average causal mediation effect (ACME) framework to the early childhood education literature—something that had never been done in an ECE paper before—and comparing it to the MSEM results.

Any one of these threads could have been a paper on its own. The MSEM versus ACME comparison alone—showing how two different estimation strategies handle mediation with clustered data in a randomized experiment—would have been a meaningful methodological contribution. The applied findings about DLL families would have been a substantive contribution. But packed into a single manuscript, they competed for attention, and no single thread got the space it deserved.

I say this with deep respect for the ambition, because I had the exact same instinct as a young researcher. You want to show everything you can do. You want the paper to prove that you understand SEM, causal inference, moderation, mediation, and the substantive literature. The problem is that a paper isn’t a dissertation defense. A journal article needs to make one clear contribution. Readers—and reviewers—need to be able to say in a single sentence what your paper adds to the literature. When you cram three contributions into one paper, they can’t.

If you are writing your first paper right now, ask yourself: could each of my contributions stand alone? If yes, they probably should. Two focused papers will serve your career—and the field—better than one overloaded one.

How we started working together

Soojin and I had already been collaborating on other projects at the University of Washington. We published a paper in PLOS ONE applying machine learning and natural language processing to predict legislative success of early care and education policies—work I got to present at the SRCD conference in Washington, DC. We also worked together on the Professional Practice Innovation (PPI) project, a cross-case analysis of state prekindergarten quality improvement through research–practice partnerships, published in the International Journal of Early Childhood Education.

So by the time Soojin asked me to help bring the HSIS paper to publication, we had a working relationship built on complementary strengths. Soojin brought the deep substantive knowledge of child development, bilingualism, and family engagement. I brought the econometric and causal inference perspective—the obsession with identification assumptions, the insistence on being explicit about what you can and cannot claim from the data.

First submission: ECRQ, then a redirect

This is where the scope problem became visible. Soojin first submitted the paper to Early Childhood Research Quarterly—the journal where it would eventually be published. But the ECRQ editors looked at the manuscript—with its MSEM, its ACME, its method comparison—and felt the methodological contribution was outside their reviewers’ scope. This is an applied early childhood journal. They publish papers about children, families, and programs. They don’t typically adjudicate disputes between structural equation modelers and potential outcomes theorists.

So ECRQ suggested Soojin try a more methods-oriented journal first. The paper went to the Journal of Applied Developmental Psychology.

This is something that doesn’t show up in the final publication, and it’s a lesson about journal scope that nobody teaches you explicitly: every journal has an implicit contract with its readers about what kind of contribution it publishes. When your paper straddles two genres—applied findings and methodological innovation—it can fall through the gap between them. The applied journal doesn’t have reviewers for the methods. The methods journal doesn’t care about your substantive question. You end up homeless.

JADP: a split decision

At JADP, the paper received two very different reviews. Reviewer #2 was supportive and constructive—no major concerns about the study design or analyses, with specific suggestions about reconciling the randomized-offer (intent-to-treat) framing with participation-based interpretations, tightening the title, and cleaning up terminology. The kind of feedback you can work with.

Reviewer #1 was a different story. They answered “No” to nearly every checklist question—novel contribution, scientific soundness, methodology, writing quality. Their concerns ranged from the definition of the DLL population to the measurement of the parent-child literacy construct to the limitation that language assessments were only in English. Some of these were legitimate and would eventually improve the paper. Others felt like a mismatch between what the paper was trying to do and what the reviewer wanted it to be.

The associate editor offered a revision but was candid: she wasn’t sure the construct measurement issues could be resolved, and even a revised manuscript could be rejected after a second round. This is a genuinely difficult moment in academic publishing. You have one supportive reviewer, one hostile one, and an editor who is being honest that the path forward is uncertain.

Two lessons here for early-career researchers: a split decision is not a death sentence—it often just means the paper hasn’t found its audience yet. And not all reviews are equally helpful. One reviewer might give you a roadmap for improvement; another might give you a list of objections without indicating which ones are dealbreakers. Learning to distinguish the two is a skill nobody teaches you in graduate school.

Rethinking, not just revising

This is the point where I came on board. Rather than respond to the JADP reviewers point by point, we made a strategic decision: rethink the paper from scratch. The dual-method approach—presenting both MSEM and ACME results—had been creating confusion since the first ECRQ submission. It was time to choose.

We dropped the MSEM entirely and focused on a more transparent causal mediation approach grounded in the potential outcomes framework. This forced us to be much more explicit about identification assumptions—particularly around the mediator–outcome relationship, which is not protected by random assignment even in an experimental design like HSIS. We also sharpened the research questions, clarified the distinction between total effects and mediated effects, introduced a more structured decomposition of parent–child literacy activities into code-focused and meaning-focused components, and expanded the analysis of heterogeneity across DLL and non-DLL children.
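For readers who want the key assumption spelled out: in the standard potential-outcomes notation for causal mediation (my sketch, following the usual formulation in that literature rather than quoting the article), sequential ignorability has two parts,

$$\{Y_i(t', m),\, M_i(t)\} \;\perp\; T_i \mid X_i = x, \qquad Y_i(t', m) \;\perp\; M_i(t) \mid T_i = t,\, X_i = x.$$

The first part holds by design in HSIS, because the offer T was randomized. The second part (no unmeasured confounding of the mediator–outcome relationship, given treatment and covariates) is exactly what randomization does not buy you, and it is why the sensitivity analysis described later mattered so much. Under both parts, the average causal mediation effect δ(t) = E[Y(t, M(1)) − Y(t, M(0))] is identified.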

The paper stopped trying to be a methods comparison and became what it always should have been: an applied paper about how Head Start works differently for different families. If you are an early-career researcher reading this: simplification is often progress. A clearer, more defensible model usually beats an ambitious one that nobody—including the authors—can fully explain.

Back to ECRQ

We resubmitted the restructured manuscript to Early Childhood Research Quarterly—the journal that had redirected Soojin years earlier. This time, with a focused applied contribution and a cleaner analytical framework, the paper landed squarely in ECRQ’s wheelhouse.

The difference in the review process was striking. The ECRQ reviewers were detailed, technically informed, and—most importantly—actionable. Let me give you some concrete examples, because I think they illustrate what good peer review actually looks like.

Reframing noise as substance. Our estimates for DLL children had wider confidence intervals than for non-DLL children. We had treated this as a statistical limitation—smaller sample, noisier estimates. One reviewer pushed back: maybe the “noise” wasn’t noise at all. DLL families are not a monolithic group. They vary in generational status, Spanish proficiency, cultural practices. The wider intervals might reflect genuine heterogeneity within DLL families, not just imprecision. This reframing didn’t just change a paragraph—it changed how we thought about the findings.

A new analysis we hadn’t planned. The same reviewer asked us to formally test whether the mediation effects actually differed between DLL and non-DLL children, rather than just comparing estimates by eye. This led us to run 1,000 bootstrap iterations computing the difference in ACME between the two groups—a completely new analysis that wasn’t in the original submission. The bootstrap test confirmed a statistically significant difference for decoding skills, turning what had been an informal comparison into a rigorous finding.
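To give a flavor of what that involved, here is a minimal sketch in Python (not the paper’s actual code; the DataFrame df and its columns treat, m, y, and dll are hypothetical). It uses the linear product-of-coefficients estimate of the indirect effect, which coincides with the ACME under linear models with no treatment–mediator interaction:

```python
# Minimal sketch: bootstrap test for a group difference in mediated effects.
# Column names (treat, m, y, dll) are hypothetical, not the paper's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(d: pd.DataFrame) -> float:
    a = smf.ols("m ~ treat", data=d).fit().params["treat"]  # treatment -> mediator path
    b = smf.ols("y ~ treat + m", data=d).fit().params["m"]  # mediator -> outcome path
    return a * b  # equals the ACME under linearity and no interaction

def bootstrap_difference(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0):
    rng = np.random.default_rng(seed)
    dll, mono = df[df["dll"] == 1], df[df["dll"] == 0]
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        # Resample within each group, then re-estimate both indirect effects.
        d1 = dll.iloc[rng.choice(len(dll), size=len(dll), replace=True)]
        d0 = mono.iloc[rng.choice(len(mono), size=len(mono), replace=True)]
        diffs[i] = indirect_effect(d1) - indirect_effect(d0)
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return diffs.mean(), (lo, hi)  # significant at 5% if the interval excludes 0
```

The point is the structure: resample within each group, re-estimate both mediation regressions, and examine the distribution of the difference, rather than eyeballing two separate confidence intervals.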

A simple addition that changed the story. Another suggestion was to add treatment-on-the-treated (TOT) estimates alongside our intent-to-treat (ITT) results. This was straightforward to implement using two-stage least squares, and the result was revealing: the first-stage F-statistic was 3,679 (an exceptionally strong instrument: assignment was a powerful predictor of attendance), and the TOT effects were roughly 40 percent larger than the ITT effects. One additional analysis, a few lines of code—and the paper became substantially more informative about what Head Start actually does for the children who attend.
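Mechanically, this is about as small as an analysis gets. With a single randomized offer as the instrument, two-stage least squares collapses to the Wald ratio: the ITT effect divided by the effect of the offer on attendance. A sketch under the same hypothetical-column-name caveat as above (assigned, attended, y):

```python
# Sketch: ITT vs. TOT in a randomized-offer design (hypothetical columns).
import statsmodels.formula.api as smf

def itt_and_tot(df):
    reduced = smf.ols("y ~ assigned", data=df).fit()       # reduced form: offer -> outcome
    first = smf.ols("attended ~ assigned", data=df).fit()  # first stage: offer -> attendance
    itt = reduced.params["assigned"]
    take_up = first.params["assigned"]   # compliance differential between arms
    tot = itt / take_up                  # 2SLS with one binary instrument = Wald ratio
    return itt, tot, first.fvalue        # fvalue is the first-stage F-statistic
```

Because take-up is below 100 percent, the denominator is less than one, which is why the TOT estimates come out larger than the ITT estimates.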

Testing the limits of our own claims. A reviewer pointed us to a sensitivity analysis framework by Keele, Tingley, and Yamamoto for assessing how robust our mediation results were to unobserved confounding. We ran it, and the answer was sobering: the mediation effects were sensitive to even modest confounding (unmeasured confounding of the mediator–outcome relationship equivalent to an error correlation of about 0.1 would be enough to explain them away). This didn’t invalidate our findings, but it forced us to reframe them as suggestive rather than definitive—which is the honest way to present causal mediation results when the sequential ignorability assumption cannot be tested directly.
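In that framework, robustness is indexed by a single sensitivity parameter (sketched here in the standard notation of the causal mediation literature, which may differ from the paper’s):

$$\rho \;\equiv\; \operatorname{Corr}\!\left(\varepsilon_i^{M},\, \varepsilon_i^{Y}\right),$$

the correlation between the error terms of the mediator and outcome models. Sequential ignorability implies ρ = 0, and the analysis asks how large ρ must grow before the estimated ACME is driven to zero. A breakdown value as small as the roughly 0.1 we found means that even mild unmeasured mediator–outcome confounding could account for the entire mediated effect.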

These were not vague objections. They were the kind of comments that make you think, “Yes, that is exactly what this paper needs.” Good reviewers are collaborators in disguise. The best feedback doesn’t just critique—it shows you how to make the paper better. The ECRQ reviewers did that.

What changed

The second round of reviews pushed the paper even further. One reviewer noticed that the factor loadings on our parent–child literacy activities scale were uneven—code-focused items (like teaching letters and numbers) loaded more heavily than meaning-focused items (like reading stories together). Rather than treating this as a measurement nuisance, they suggested we decompose the scale into subscales and see if the mediation pathways differed. This observation spawned an entirely new research question—what became RQ4 in the final paper—and one of its most interesting findings: the parenting pathway operated differently depending on what kind of literacy activity parents were doing.

Another reviewer caught something we had missed: our Figure 2 showed overlapping confidence intervals between the DLL and non-DLL groups, while Table 5 reported a statistically significant bootstrap difference between them. The two are not mathematically contradictory (overlapping group-level intervals do not imply a non-significant difference), but the figure invited exactly that misreading. Rather than try to explain the subtlety in the text, we removed the figure entirely. Better to have one clear representation than two seemingly contradictory ones.

There were also hard truths. A measurement invariance test across DLL and non-DLL groups showed poor fit (RMSEA = .145), meaning we could not assume the literacy activities scale was measuring the same construct in the same way for DLL and non-DLL families. This didn’t kill the paper, but it required us to add careful language about interpreting cross-group comparisons with caution. The reviewers also pushed us to document missing data patterns by DLL status, leading to a new supplemental table showing that missingness was similar across groups for key variables—a small addition that meaningfully strengthened the paper’s credibility.

By the time of the final revision, the paper had changed substantially from its original form. The analytical framework was more focused. The interpretation was more cautious. The contribution was more clearly articulated. What began as a general question about program impacts evolved into a more precise insight: similar overall effects can arise through different behavioral pathways, and understanding those pathways requires careful attention to both measurement and heterogeneity. That sentence didn’t exist in the first draft. It emerged from the process.

How collaboration works

One thing that is rarely discussed openly is how authorship and contribution actually work in research collaborations. In our case, Soojin is first author—and rightly so. She conceived the project, identified the research gap, built the theoretical framework, and led the writing throughout. She is the one who understands the child development and bilingual education literature at a depth that I cannot match.

My contribution was primarily methodological. I brought the causal inference framework, pushed for analytical clarity, ran the mediation analyses, and helped translate the findings into language that would satisfy both developmental psychologists and econometricians. I also did much of the heavy lifting on revisions—responding to reviewer comments, restructuring sections, rewriting the methods and results.

This kind of complementary collaboration—where one person brings the substantive expertise and another brings the methodological toolkit—is common in applied research but rarely made visible. If you are a graduate student looking for collaborators, find someone whose strengths compensate for your gaps. The best papers often come from these pairings.

For early-career researchers

A few lessons from this process that I wish someone had told me earlier:

Simplification is often progress. Moving from a complex model to a clearer one can strengthen both identification and interpretation. Don’t confuse methodological sophistication with methodological rigor.

Not all reviews are equally helpful. A rejection does not necessarily reflect the quality of your work, but it may signal that the contribution is not yet clear. Read between the lines.

Good reviewers are rare and valuable. When you get constructive feedback, treat it as a gift. The best reviewers see what the paper could become, not just what it is.

Your paper will change. The version that gets published is often quite different from the one you started with—and that’s a good thing. If the paper hasn’t changed, you probably haven’t learned anything from the process.

Perseverance matters more than brilliance. Soojin’s message when the paper was finally published—“Thank you for persevering through the years”—captures the reality of academic research. The difference between a published paper and an abandoned one is often not quality. It is persistence.

Find collaborators who complement you. The PLOS ONE paper, the PPI project, and now this ECRQ paper all came from the same partnership. Find people whose expertise fills in your blind spots, and invest in those relationships.

Why this one is personal

For me, this paper sits at the intersection of everything I care about: early childhood, causal inference, the experience of growing up between languages and cultures. I grew up as a dual language learner myself—Czech at school, Arabic at home, in a country that was still figuring out what it meant to be open to the world. The questions in this paper are not abstract to me.

Causal inference is often presented as a set of tools or methods. In practice, it is also a process of refinement: clarifying questions, tightening assumptions, and iterating on both analysis and interpretation. Peer review, at its best, is part of that process. It can be uneven and sometimes frustrating, but it can also play a central role in transforming a good idea into a stronger, more coherent piece of research. That is what happened here.

The full paper is available open-access here until May 31, 2026. If this work resonates with you—or if you’re navigating the peer review process yourself and want to talk—reach out on LinkedIn or at nail@hassairi.com.

Citation
  • Park, S. O., & Hassairi, N. (2026). Differential pathways of parenting support: Exploring Head Start’s stronger effects on the early literacy skills of dual language learners. Early Childhood Research Quarterly, 76, 335–345. https://doi.org/10.1016/j.ecresq.2026.03.016