Washington cities are rapidly adopting AI technology for government operations, yet policy frameworks lag behind the surge. Public records reveal extensive use of ChatGPT by city employees in Everett and Bellingham, who rely on AI tools to draft emails, respond to constituents, and manage administrative tasks.
Adoption is outpacing regulation across local governments, fueling concerns about ethics, transparency, and security. Nearly 80% of state and local government IT directors worry about unclear AI regulations, according to a nationwide survey.
Why AI Adoption Matters Now
Local governments see AI as an efficiency booster. Bellingham Mayor Kim Lund calls constituent email responses a “high-use case” for AI tools. “We consider that a permissive use of AI for efficiency reasons,” Lund said. City staff use ChatGPT to draft responses about parking, traffic, and other public concerns.
However, citizens notice the difference. Bre Garcia received an AI-generated response to her snow plowing complaint, and the reply felt impersonal and dismissive. “It was like they didn’t read my email at all,” Garcia said. Records show only four words were added by a human to ChatGPT’s output.
Public records requests uncovered thousands of ChatGPT conversation logs. City employees use AI for complex tasks including policy research, grant applications, and comprehensive plan updates. Some officials asked ChatGPT to generate $7 million grant applications and support letters from elected leaders.
Strategic Advantages Drive Government AI Use
Cities view AI as a competitive advantage for operational efficiency. Everett Mayor Cassie Franklin supports AI adoption. “It would be silly not to,” Franklin said. “It’s a tool that can really benefit us.” Staff use AI to improve email tone, summarize meetings, and research enterprise software.
Bellingham takes a “permissive approach” to AI usage. IT Director Don Burdick encourages staff exploration while maintaining human oversight. “The industry is evolving way too fast,” Burdick said. “Keeping that sort of grip on things is not productive.”
Everett follows a more cautious path. The city restricts staff to Microsoft Copilot for security reasons. ChatGPT use requires special exemptions. “There’s a lot of safeguards in the Microsoft product versus ChatGPT,” said IT Director Chris Fadden.
Risks and Security Considerations
AI adoption raises significant security and privacy concerns. ChatGPT is not a secure platform for government data, yet some chat logs required redaction because employees entered confidential information, including computer code used to track homeless encampments and details of active police investigations.
Washington’s AI guidance warns against entering non-public data into systems like ChatGPT. This could lead to “unauthorized disclosures, legal liabilities, and other consequences,” according to state guidelines. However, compliance with security guidelines remains inconsistent across cities.
Accuracy represents another major challenge. Chatbots frequently make up information or introduce errors, and records show ChatGPT introduced mistakes into official documents. Most errors were corrected through human review, but the risk remains.
Federal Agencies Push Back on AI Use
Some federal agencies are restricting AI use in critical applications. The National Institutes of Health announced it won’t consider grant applications “substantially developed by AI.” The agency cited concerns about fairness and originality in the review process.
This creates uncertainty for cities using AI for grant applications. Bellingham staff used ChatGPT to help request state funding for cyclist and pedestrian safety projects. Everett employees generated support letters and racial equity narratives for housing grant applications.
What Business Leaders Should Know
Washington’s AI Task Force leads policy development efforts. Established in 2024, the task force includes elected officials, business stakeholders, and advocacy representatives. It is charged with delivering recommendations on AI use in both the public and private sectors.
The task force has made one official recommendation so far: lawmakers should strengthen language against AI-generated child sexual abuse material. Preliminary reports are due in September, with final recommendations by July 2026.
“Technology moves very fast, law and regulation tends to move slowly,” said Yuki Ishizuka, a policy analyst on the task force. Cities need clear guidelines as AI becomes more embedded in government functions.
Transparency remains a key challenge. State guidance calls for labeling AI-generated government content, yet none of the reviewed records included such disclosures. Cities are still debating whether to require citations when AI is used.
Both Bellingham and Everett expect to finalize AI policies before year-end. These frameworks will likely influence other Washington cities as AI adoption accelerates across local government operations.