A reader recently asked: "What is the difference between the process evaluation metrics for a place-based initiative tackling diabetes in a specific public housing complex versus a city-wide media campaign on the same topic?" This is an excellent question that gets to the heart of what we do in public health evaluation. From my work designing and assessing these programs, the difference isn't just in scale—it's in the fundamental nature of the data you collect and what it tells you about success or failure.
Think of a city-wide media campaign as a broadcast signal. Your primary goal is dissemination and awareness. You're measuring outputs: how far and wide did your message travel? Process metrics here are largely about reach and exposure. You'll look at media impressions, website visits to campaign materials (like those from the National Diabetes Education Program's resource library), call volume to hotlines, and social media engagement rates. A 2023 analysis of public health media campaigns showed that even successful ones often achieve a confirmed awareness rate of only 22-35% in their target demographic. You're dealing with proxies for impact; you know a billboard was seen, but you have little data on how it was interpreted or whether it changed a single person's behavior.
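To see how thin these output metrics are in practice, here is a minimal sketch in Python, using hypothetical figures and function names of my own, of the reach calculations a campaign dashboard typically reduces to:

```python
# A minimal sketch, with hypothetical figures, of campaign reach metrics.
# These are outputs and proxies for impact, not measures of behavior change.

def click_through_rate(clicks: int, impressions: int) -> float:
    """Share of impressions that led to a click on campaign materials."""
    return clicks / impressions

def confirmed_awareness_rate(aware_respondents: int, surveyed: int) -> float:
    """Share of a follow-up survey sample that recalls the campaign message."""
    return aware_respondents / surveyed

impressions = 500_000           # billboard, broadcast, and digital exposures
clicks = 10_000                 # visits to campaign resource pages
aware, surveyed = 280, 1_000    # awareness survey results

print(f"Click-through rate: {click_through_rate(clicks, impressions):.1%}")     # 2.0%
print(f"Confirmed awareness: {confirmed_awareness_rate(aware, surveyed):.1%}")  # 28.0%
```

Notice that every number here describes the message's travel, not its reception; that is the ceiling of what this kind of process evaluation can tell you.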
Now, contrast that with a place-based initiative in a public housing complex. This is a surgical intervention, not a broadcast. Your metrics shift from outputs to process fidelity and contextual adaptation. You're no longer just counting clicks. You're measuring: How many residents attended the weekly cooking demonstration? Of those, how many reported trying a recipe at home? What was the facilitator's adherence to the planned curriculum? How were community health workers received door-to-door? You're embedded. A 2024 study in the Journal of Community Health tracking similar hyper-local health programs found that median participation in scheduled group activities was 41%, but that this metric was far less telling than qualitative data on why attendance fluctuated. Process evaluation here is deeply relational and observational.
Here's something field practitioners report that often surprises people outside direct service: for the place-based initiative, your most critical process metrics are often qualitative and concern trust and adaptation, not just quantitative participation. In a defined community, a failed process isn't just low numbers; it's a breakdown in the implementation chain that you can actually diagnose and fix in real time.
For example, you might track the ratio of planned activities to adapted activities. If you planned an outdoor walking group but the housing complex's courtyard is consistently unsafe or poorly lit, a key process metric is how quickly and effectively the team pivoted to an indoor alternative. This metric of adaptive fidelity is meaningless in a media campaign. Similarly, tracking the nature of questions asked during sessions (e.g., shifting from "What is diabetes?" to "How do I read my own glucose monitor?") is a process metric that shows deepening engagement. In many programs of this kind, you'll find that by month three, over 60% of participant interactions shift from basic information-seeking to problem-solving specific to their environment, which is a powerful indicator of process quality.
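Here is a minimal sketch, assuming hypothetical log structures and category labels, of how a team might track that adaptation ratio and the shift in question types pulled from facilitator debrief notes:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical activity log: each entry records whether a session ran as planned
# or had to be adapted to conditions on the ground (e.g., moved indoors).
@dataclass
class Activity:
    name: str
    delivered_as_planned: bool

def adaptive_fidelity_ratio(log: list) -> float:
    """Share of delivered activities that required adaptation."""
    adapted = sum(1 for a in log if not a.delivered_as_planned)
    return adapted / len(log)

def problem_solving_share(question_counts: Counter) -> float:
    """Share of logged participant questions classified as problem-solving."""
    return question_counts["problem_solving"] / sum(question_counts.values())

activity_log = [
    Activity("outdoor walking group (moved indoors)", delivered_as_planned=False),
    Activity("weekly cooking demonstration", delivered_as_planned=True),
    Activity("glucose monitoring workshop", delivered_as_planned=True),
]

# Hypothetical question tallies drawn from facilitator debrief notes.
month_one = Counter(basic_information=14, problem_solving=3)
month_three = Counter(basic_information=5, problem_solving=12)

print(f"Adaptation ratio: {adaptive_fidelity_ratio(activity_log):.0%}")
print(f"Problem-solving share, month one: {problem_solving_share(month_one):.0%}")
print(f"Problem-solving share, month three: {problem_solving_share(month_three):.0%}")
```

The categories and thresholds are illustrative, not a standard instrument; the point is that the raw material is a running log the team already keeps, not an analytics feed.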
Furthermore, the data sources are different. Media campaigns rely on analytics dashboards and survey samples. The place-based initiative runs on participation logs, facilitator debrief notes, and direct observation. You might measure the percentage of scheduled home visits that were completed versus those rescheduled due to a lack of trust—a very real metric we track. According to principles seen in pragmatic clinical trials, which aim to understand effectiveness in real-world, messy settings rather than idealized conditions, this contextual data is not a confounder to be eliminated; it is the essential story of the intervention's process.
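As a small illustration of that log-derived metric, with made-up household IDs and status labels, here is how the completion-versus-reschedule tally might be computed from a visit log:

```python
from collections import Counter

# A minimal sketch with hypothetical field names: tallying home-visit dispositions
# from a log kept by community health workers.
visit_log = [
    {"household": "A-12", "status": "completed"},
    {"household": "B-03", "status": "rescheduled_trust"},
    {"household": "C-07", "status": "completed"},
    {"household": "D-01", "status": "rescheduled_other"},
    {"household": "E-09", "status": "completed"},
]

counts = Counter(visit["status"] for visit in visit_log)
scheduled = sum(counts.values())

print(f"Completed: {counts['completed'] / scheduled:.0%}")
print(f"Rescheduled, trust-related: {counts['rescheduled_trust'] / scheduled:.0%}")
```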
This is where the work of a community health institute often focuses: building the capacity to capture and learn from these nuanced, place-based process metrics, moving beyond simple reach to understand depth.
Ultimately, the difference boils down to this: the media campaign's process evaluation asks, "Did we execute our dissemination plan as intended?" The place-based initiative's process evaluation asks, "How did the continuous interaction between our program and the living context of this community shape the delivery and reception of our intervention?"
The former uses metrics of breadth and consistency of message. The latter uses metrics of depth, relational trust, and adaptive implementation. Both are valid, but only one gives you the granular feedback needed to adjust your approach with a specific group of people in a specific place. In public health, we need both tools, but we must never mistake the metrics of one for the success of the other. A media campaign might generate 500,000 impressions with a 2% click-through rate, while the housing complex initiative might deeply engage 50 families. The process of evaluating those two outcomes, and what you learn from it, is worlds apart.
National Diabetes Education Program (NDEP). "Small Steps, Big Rewards" campaign materials. Available at www.ndep.nih.gov.
Pragmatic clinical trial principles informing real-world evaluation approaches, as referenced from general epidemiological methodology.
Internal program evaluation data and benchmarks from community health institute implementations (2023-2024).
Journal of Community Health (2024). "Participation and Context in Hyper-Local Health Interventions."
Health Affairs (2022). "The Cost and Value of Implementation Evaluation in Place-Based Health."