Constraint-Driven Prompt Engineering Helps Lean Comms Teams Scale Without Losing Quality

Stacey Sevilla, Communications and Marketing Lead for Municipal District of Greenview, shares how her small team uses AI to keep pace with constant media releases and news updates without sacrificing quality.

CommsToday - News Team
Published
May 5, 2026
Credit: CommsToday

Blank page paralysis still exists, but now it’s in a different form. Getting results that actually work for comms professionals is all about prompt engineering.

Stacey Sevilla

Communications & Marketing Lead
Municipal District of Greenview

AI is making its way into the communications tech stack, but it doesn't always eliminate the dread of starting from a blank page. In fact, because AI tools let teams scale content more easily, they also create pressure to produce it faster. Meeting that demand while still producing quality output requires thoughtful AI handling: clear constraints, careful editing, and pragmatic governance.

One sector where scaling communications with AI comes in handy is government. Stacey Sevilla runs communications and marketing for the Municipal District of Greenview, a sprawling 32,000‑square‑kilometer jurisdiction in Alberta roughly the size of Belgium. Her three‑person team oversees corporate messaging, emergency response, and crisis communications for a region that includes an upcoming industrial gateway and AI data center. She has seen plenty of technological change over her 25-year career, driving digital strategy for national health brand Lakota long before social media went mainstream. Today, she applies that same early-adopter pragmatism to artificial intelligence.

"Blank page paralysis still exists, but now it’s in a different form. And there’s this added pressure that it should be faster, so you feel like you’re doing something wrong if it isn’t. Getting results that actually work for communications professionals is all about prompt engineering," Sevilla says. Many professionals note the initial anxiety surrounding job replacement is starting to ease. What's replacing that pressure is a feeling that they need to keep up with the technology. Sevilla views AI as a clear competitive advantage for those willing to put in the work to master it, marking a wider transition from fear to adoption across the communications industry.

  • Panic to pragmatism: "Three years ago, there were a lot of communications professionals who were very hesitant, very distrustful, and very much of the mindset that AI is going to replace us," Sevilla says. "We all need to tell each other that AI is not going to replace us as professionals." She believes communications professionals who use AI effectively will have a clear advantage.

Implementing AI successfully to scale a small team's output means building a system that speeds up communications without lowering editorial standards. To manage Greenview’s workload, Sevilla relies on platforms with generative capabilities like Canva to empower other municipal departments to create their own visual assets for programs and events. But current AI-generated imagery still lacks polish. Right now, she treats it strictly as a tool for concepting, then relies on traditional design tools for finished work.

  • The Canva caveat: AI is useful for drafting, but not yet ready to carry visual assets through to final send. "Our teams are not using the AI to generate imagery because it's just not quite there yet," she says. "You can definitely tell AI-generated graphics because they have a similar look or a similar tone, and we haven't yet trained our AI engines to that level where it can actually produce something that we would consider using."

To get the most out of AI, Sevilla suggests constraint-based prompting: she gives the system explicit rules about what not to do. She compares the practice to writing a clear email to move a project forward: if the instructions are vague, the results will often miss the mark.

  • Ditching the dash: "I gave ChatGPT a humongous list of strict constraints. I don't want em dashes, I don't want repeated fluff words," Sevilla says, and this often leads to impressive results. But she reminds teams that human intervention is still imperative to get the tone right. "My ChatGPT can write like me, but I very rarely accept its first iteration. I always drill it down at least 10 times for even the smallest things."
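Rules like "no em dashes, no repeated fluff words" lend themselves to a simple automated check before a draft goes to a human editor. Here is a minimal sketch in Python; the banned-word list is purely illustrative, since Sevilla's actual constraint list is not public.

```python
import re

# Illustrative "fluff words" a comms team might ban; not Sevilla's real list.
BANNED_WORDS = {"delve", "leverage", "robust", "seamless"}

def check_constraints(draft: str) -> list[str]:
    """Return a list of style-constraint violations found in an AI draft."""
    violations = []
    if "\u2014" in draft:  # em dash, U+2014
        violations.append("contains em dash")
    # Lowercase word extraction so the banned-word check is case-insensitive.
    words = set(re.findall(r"[a-z']+", draft.lower()))
    for banned in sorted(BANNED_WORDS & words):
        violations.append(f"contains fluff word: {banned}")
    return violations
```

A clean draft returns an empty list; anything else gets flagged for the "drill it down" editing pass Sevilla describes.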

She applies the same approach to brand voice. By feeding the system consistent instructions about language, tone, and how Greenview’s brand should be referenced, she tunes the outputs to match organizational expectations. But closely mimicking an executive's voice requires inputting specific corporate data, which often collides with IT security protocols. As many organizations grapple with ChatGPT data retention concerns and navigate Copilot security and management within Microsoft 365, practitioners like Sevilla put bottom‑up governance in place to keep workflows moving safely.

  • Policy over panic: "The governance side is still not caught up to the practical use side," she says. "IT sees it as huge risk, and practical users see that as excessively restrictive. Everybody's butting heads a little bit right now trying to find that balance of use where it's not too restrictive but it's still maintaining some privacy."

  • Scrubbing the prompt: In Greenview, that balance starts with simple, repeatable rules. Sevilla shares her own prompt constraints with her team and emphasizes anonymization as a baseline. For any prompt involving management or HR issues, the discipline is identical: remove names and personal details before asking the software to help frame key messages. "If I'm asking about a management or staffing issue, I would not use any names," she says. "If I put in a prompt and I want it to help me develop key messaging for a feedback session, I would not put personal details in there."
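The "scrub before you prompt" habit can be partially automated. The sketch below assumes a team-maintained list of staff names (hypothetical here) and adds baseline regexes for emails and phone-like digit runs; it is an illustration of the discipline, not a substitute for proper PII-detection tooling.

```python
import re

def scrub_prompt(prompt: str, known_names: list[str]) -> str:
    """Redact personal details from a prompt before it is sent to an AI tool.

    known_names is an illustrative, team-maintained list of staff names;
    real deployments would pair this with dedicated PII detection.
    """
    scrubbed = prompt
    for name in known_names:
        scrubbed = re.sub(re.escape(name), "[REDACTED]",
                          scrubbed, flags=re.IGNORECASE)
    # Baseline patterns: email addresses and phone-number-like digit runs.
    scrubbed = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", scrubbed)
    scrubbed = re.sub(r"\b(?:\d[\s-]?){7,11}\b", "[PHONE]", scrubbed)
    return scrubbed
```

The anonymized prompt still carries enough context for the model to help frame key messages, which is the balance Sevilla describes.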

  • Playing AI therapist: She also advises against treating generative software as a search engine or confidant. Because the tool hallucinates, professionals must actively verify citations and strip personal data, maintaining a standard of discipline the general public often ignores. "We definitely don't use it like Google, but I know the vast majority of the population does use it like Google or like a therapist. I wouldn't recommend that either," Sevilla says. "You have to be careful about citations. It's prone to hallucination, and the content it spits out is very convincing."

While the technology continues to advance, the need for human judgment remains largely unchanged. This fact is influencing how many executives think about AI heading into 2026, with leaders spearheading efforts to show this moment is less of a workplace threat and more of a learning opportunity for employees. A growing number of specialized consultants even help organizations build generative AI PR playbooks, answer ethical questions, explore how the technology intersects with brand discovery and human trust, and create internal comms frameworks for AI use. "We're seeing an explosion of AI communications specialists who now sell their consulting services to help organizations effectively use AI, give them guidelines, and talk about the ethical piece," she says. "I'm seeing a new avenue for comms professionals emerging."

Tools improve, but the discipline of constraint‑based adoption separates teams that get real value out of the software from those that do not. "I do think we're going to have to change the way we do things, but I think our effective use of those tools will only make us better," she says.