What We Have—and Why It Falls Short
The federal government does collect some data on how young people spend their time, but none of its surveys or studies answer the key questions. It is worth being precise about why, because the case for developing a new instrument rests on understanding exactly what the existing ones cannot do.
The American Time Use Survey (ATUS, run by the Bureau of Labor Statistics) is the gold standard for one-day time diaries. It has been running since 2003, samples individuals ages 15 and older, and captures detailed records of how respondents spend their time during a single 24-hour period. It is genuinely useful for tracking national averages.
But its fundamental limitation is structural. Each respondent provides exactly one diary day. That means ATUS cannot tell you how an individual teen behaves across a typical week. It cannot tell you whether the teen who spent six hours on screens Monday was the same teen who skipped soccer practice on Tuesday. It cannot track changes in an individual teen’s time use over time, nor can it serve as a platform for any intervention. ATUS is a snapshot, not a film. For understanding national averages, that’s fine. For understanding whether a particular kid is flourishing this semester, it tells you nothing.
The Adolescent Brain Cognitive Development Study (ABCD, run by the National Institutes of Health) is the most ambitious longitudinal effort in the space. It enrolled over 10,000 children ages 9–10, uses wearables and passive smartphone sensing, and tracks behavior in serious depth. It proves that sensor-based measurement of teen behavior at scale is feasible. That matters.
But ABCD is a neuroscience study. It is brain-first, not participation-first. It was not designed to measure whether teens are converting their evenings into sports, the arts, paid work, or homework. It does not link to schools’ learning management systems or team rosters. It has no experimental engine for testing evening-time interventions. And it is not designed to produce the simple, policy-facing measures that a national assessment would provide.
The High School Longitudinal Study (HSLS:09) is NCES’s own flagship adolescent panel, tracking over 23,000 9th graders’ academic outcomes. It is excellent at capturing what happens inside schools. It is nearly silent on what happens between 3 p.m. and midnight. No wearables, no phone data, no after-school participation verification. HSLS tells us whether a student graduated or took calculus. It cannot tell us whether that student was spending five hours a day on TikTok.
Surveys like the Youth Risk Behavior Surveillance System (YRBSS, run by the Centers for Disease Control and Prevention) provide broad snapshots of teen behavior. But they rely on self-report, do not follow individuals over time, and again are not designed as experimental platforms. They are useful for surveillance. They are not useful for policy evaluation.
Monitoring the Future is a long-running study (since 1975) of the “behaviors, attitudes, and values” of adolescents in the 8th, 10th, and 12th grades and of young adults. But it relies on self-report and was designed around substance use rather than time allocation.
The University of Michigan’s Panel Study of Income Dynamics is another venerable longitudinal study (running since 1968) that includes youth time-diary data through its Child Development Supplement. But it was built to study household economics and intergenerational mobility, not after-school hours.
Each of these efforts matters. Even taken together, they still leave a gap.
No national system tracks how teenagers spend their time week by week, with verified measurement, linked cleanly to outcomes. The one that comes closest on depth is ABCD, due to its pioneering use of sensors. The one that comes closest on education outcomes is HSLS:09. The one that comes closest on time use is ATUS. But none of them, alone or in combination, can answer the most basic question: How many verified hours is a nationally representative sample of American teenagers spending on flourishing activities versus languishing activities each week?