How Bad Actors Could Exploit This (And Why It Matters)

“But no one’s actually misused it, right?”

That was the gist of the response we received after raising concerns about the hospital waitlist text message campaign. The implication? No harm, no foul.

But that logic misses the point — and the risk.

Because when a real message behaves like a scam, you don’t need to imagine abuse.
You’ve already written the playbook.


[Image: Bad actor sending phishing text messages]

Bad actors send texts all the time.


Let’s imagine you’re a scammer. You want to phish personal info from thousands of New Zealanders with minimal effort.

You’d look for:

  • A trusted institution people won’t question
  • A real event or campaign to piggyback on
  • A communication channel that already bypasses people’s scepticism

Now enter:
A national health agency texting people from generic mobile numbers…
…asking for personal info…
…with no sender ID…
…and no official verification path.

Congratulations. You don’t need to fake credibility. It’s already done for you.


The playbook practically writes itself:

  • Step 1: Buy a prepaid NZ SIM
  • Step 2: Copy the real message format

    “Kia ora, we’re checking if you’re still waiting for surgery. Please confirm: name, DOB, address, email. Reply YES if still waiting.”

  • Step 3: Target numbers harvested from data leaks or directories
  • Step 4: Set up a fake callback line or spoofed email

Because the real messages had no digital signature, no verified short code, and no link to an official domain, there’s nothing to distinguish the fake from the real.

That’s the risk — not theoretical, but baked in.
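
Part of what a link to an official domain buys you is something checkable. Here’s a minimal sketch in Python (the domain below is a hypothetical placeholder, not a real Te Whatu Ora address) of the kind of verification a recipient, help-desk staffer, or filtering tool could run if the campaign had published one:

```python
from urllib.parse import urlparse

# Hypothetical placeholder for a published campaign domain; the real campaign
# linked to nothing, which is exactly the gap described above.
OFFICIAL_DOMAIN = "waitlist.example.health.nz"

def looks_official(url: str) -> bool:
    """Accept only HTTPS links to the official campaign domain or its subdomains."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    on_official_domain = host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)
    return parsed.scheme == "https" and on_official_domain

print(looks_official("https://waitlist.example.health.nz/confirm"))  # True
print(looks_official("https://waitlist-health-nz.info/confirm"))     # False: look-alike domain
print(looks_official("http://waitlist.example.health.nz/confirm"))   # False: not HTTPS
```

Without a published domain, neither people nor tools have anything like this to check against.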


Even if no one abuses this campaign directly, it still causes harm by:

  • Training people to respond to vague, unauthenticated messages
  • Making it harder to tell scams from real outreach
  • Undermining future scam education efforts (the real message trips CERT NZ’s own scam checklist)

We tell people not to reply to unknown numbers.
We tell people not to send personal info over text.
Then a public health agency asks them to do exactly that, and calls it progress.


Te Whatu Ora may have meant well. But good intent doesn’t prevent bad outcomes.

This wasn’t just a communication misstep.
It was a real-world example of what happens when privacy, security, and trust aren’t built into the rollout plan.

And unfortunately, that gap is exactly what bad actors look for.


The scary part is how easy it would have been to avoid this:

  • ✅ Use verified sender IDs or shortcodes
  • ✅ Link to a public-facing campaign page
  • ✅ Send advance notice through secure channels (portal, letter, email)
  • ✅ Ask for confirmation only, not personal data
  • ✅ Give people a clear path to opt out or call back securely

That’s not a wishlist. That’s basic hygiene for public digital communication.
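
To make the checklist concrete, here is a minimal sketch of a confirmation-only message sent through an SMS gateway (Twilio’s Python library, purely as an example; other gateways are similar). The sender ID, campaign URL, and credentials are hypothetical placeholders, not anything Te Whatu Ora actually uses.

```python
# Minimal sketch of a confirmation-only waitlist message, using Twilio's Python
# helper library as a stand-in for any SMS gateway. The sender ID, campaign URL,
# and credentials below are hypothetical placeholders.
from twilio.rest import Client

CAMPAIGN_URL = "https://waitlist.example.health.nz"  # published page people can look up themselves

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def send_waitlist_check(to_number: str) -> None:
    """Ask for a YES/NO confirmation only; never request personal details by text."""
    body = (
        "Kia ora. We are checking whether you are still waiting for surgery. "
        "Reply YES if you are still waiting, or NO if you are not. "
        f"We will never ask for personal details by text. More info: {CAMPAIGN_URL} "
        "Reply STOP to opt out."
    )
    client.messages.create(
        to=to_number,
        from_="HealthNZ",  # alphanumeric sender ID, where carriers support it
        body=body,
    )

# Example usage (placeholder number):
# send_waitlist_check("+64210000000")
```

Compare that with the message in Step 2 above: same channel, same question, but a named sender, a page people can verify independently, and no request for personal details.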


If the public can’t tell the difference between a real message and a scam…
…then the system is broken — even if no one exploits it.

Good design prevents confusion.
Bad design doesn’t need help to cause harm.

This series wasn’t about panic. It was about showing how even well-meaning systems can erode trust when privacy isn’t part of the blueprint.

We’re done normalising insecure defaults.


Want to keep learning how to protect your data — without the overwhelm?
Join the 7-Day Privacy Bootcamp or follow us on Facebook.