Your AI tool just generated new information about your client. Do you know what IPP3A requires?

BY RICHARD - 13 May 2026

On 1 May 2026, a significant change to New Zealand’s Privacy Act 2020 took effect. The new information privacy principle 3A (IPP3A) requires agencies that collect personal information about an individual “other than from the individual concerned” to take reasonable steps to inform that individual about the collection.

If your organisation uses AI tools to process personal information, this change could affect you. And it could do so in two quite different scenarios.

What changed

Until now, the Privacy Act’s disclosure principle (IPP3) focused on direct collection of personal information from individuals – for example, when you ask someone to fill in a form or hand over their details. There was no requirement, when collecting personal information about individuals from third parties, to tell individuals you were doing so. IPP3A fills the gap. Now, if your agency collects personal information about someone from a source other than the individual themselves, it has transparency obligations that largely mirror those under IPP3. In the health sector, the same applies under the new health information privacy rule 3A (HIPR3A), which closely tracks IPP3A.

Note the wording of IPP3A. It does not refer to collecting personal information “from another person, agency or organisation”. It refers to collecting personal information about an individual “other than from the individual concerned”. That’s broader than it might first appear, and it may matter for how the principle applies to AI.

Scenario 1: AI that generates new personal information about someone at your request

The first situation is where you use a third-party AI tool that doesn’t just process existing personal information in prompts, but generates new personal information about an individual that you didn’t have before.

Consider a doctor or psychologist who uses an AI scribe tool during client sessions. The tool transcribes and summarises the session, which is straightforward enough. But some of these tools can go further. For example, they might be able to produce a differential diagnosis, suggest a treatment plan, or flag risk factors based on what was discussed. That output is new health information about the client, and when you ask the AI tool to generate it, you’re collecting it from the AI tool, not from the client.

Or consider a radiologist who uses a third-party AI tool to provide a second opinion on imaging results. The AI analyses a scan and produces its own assessment, identifying something the radiologist hadn’t spotted. That assessment is new health information about the patient, generated by the AI.

Because IPP3A/HIPR3A covers collection “other than from the individual concerned”, these situations fall within its scope, and that can be the case regardless of whether the information is an opinion, or is derived or inferred. The practitioner is collecting new personal information about the individual from a source other than the individual. Unless an exception to the disclosure requirements applies, HIPR3A requires the practitioner to inform the individual about that collection.

This is easy to overlook. Practitioners might treat AI-generated outputs as part of their own professional assessment rather than as information collected from an external source. But under HIPR3A’s wording, the source of the information matters, and the source here is the AI tool.

There’s an interesting practical consequence. In some situations a practitioner might reasonably conclude that, when they collect information directly from a client and then input it into an AI tool, they don’t need to mention the AI provider as a “recipient” of the information under IPP3 (that is, where section 11 of the Privacy Act applies, so that there is no ‘disclosure’ to the AI tool provider because the provider will not use or disclose the information for its own purposes). But IPP3A creates a separate obligation. If the practitioner is using the AI tool to generate additional personal information about the client, IPP3A will require transparency about that use (unless an exception applies), regardless of the IPP3 position. In practice, this means the practitioner might as well – and in my view should – be upfront about use of the AI tool from the outset.

Scenario 2: Agentic AI that gathers personal information from external sources

The second situation is where AI goes out and collects personal information from third-party sources on your behalf.

This is likely to be increasingly common with agentic AI workflows: automated systems that search websites, scrape social media profiles, query databases, or pull information from registers. A recruitment tool that gathers candidate information from LinkedIn profiles is a clear example.

In cases like this, the AI system is collecting – for you – personal information about individuals who never provided it to your organisation. Unless an exception applies, that’s squarely within IPP3A. Separate questions may arise under IPP1 as to whether it’s reasonably necessary to collect the information in the first place and under IPP2 as to whether it’s OK to collect the information from another source, but for now our focus is on IPP3A.

Now, there is an exception in IPP3A that says an agency doesn’t need to make the disclosures if it reasonably believes the information being collected is publicly available information (this is also one of the exceptions in IPP2 that enables agencies to collect information from another source). That may well apply in some of the situations described above, but not necessarily always – especially when agentic workflows are set up in a way that enables an AI tool to access, through your web browser or other means, online services to which you’re already logged in, or to use your login credentials to access them. In that kind of scenario, the AI tool – your agentic workflow – could be accessing information that is not ‘publicly available information’ as that term is understood under the Privacy Act. (You might also be breaching IPP4 and/or the service provider’s terms of use, but those are separate questions.)

What to do now

If your organisation uses AI tools to process personal information, at least two IPP3A/HIPR3A-related steps are worth taking now:

  1. Identify whether your organisation is using AI tools to generate new personal information about individuals (Scenario 1) or is using AI tools or agentic workflows to collect personal information from external sources (Scenario 2).
  2. If your organisation is doing either of these things, consider whether any IPP3A/HIPR3A exceptions apply. If they don’t, or if a code of ethics or regulatory guidance in your sector requires or strongly encourages disclosure, update your privacy notices, consent forms, or other transparency mechanisms. If you use AI tools to generate new information about individuals, or to collect information about them from third-party sources, your existing notices are unlikely to cover this.

 

If you need a hand, or would like to discuss other legal implications of processing personal or health information with AI tools, feel free to get in touch.

Update (13.05.26)

In the health sector, HIPR2 (Source of health information) is stricter than its non-health equivalent, IPP2. Under IPP2, an agency doesn’t need to collect personal information directly from the individuals concerned if it believes on reasonable grounds that, among other things, non-compliance would not prejudice the interests of the individual concerned. That ground does not exist in HIPR2: there are other grounds, but that one cannot be relied on. In the health-related AI scenarios we’re looking at, there might be no exception justifying collection from another source other than consent. Where that is the case, informed consent may be the only means by which new health information can legitimately be collected from a third-party AI tool.

Interestingly, the Medical Council’s March 2026 ‘Guidance on using artificial intelligence (AI) in patient care’ contains this section on informed consent:

“9. There are some specific situations where you need to obtain informed consent for the use of AI, including when:

a. using an AI tool to record the consultation, such as a transcription tool (scribe)

b. the patient’s personal details are shared outside of the primary medical record or used for AI training in a way that could identify them

c. the AI technology plays a significant role in diagnosis, treatment or delivery of care.”

For present purposes, I’m focusing on paragraph 9(c).

The Medical Council guidance here will likely stem, at least in part, from both general provisions in the Code of Health and Disability Services Consumers’ Rights and, in cases where new health information is being collected, HIPR2. In cases where an AI tool is being used in any way described in paragraph 9(c) that involves the collection of new health information from the AI tool, not only must patients be informed about that collection unless an exception applies (HIPR3A), but informed consent must be obtained before the collection occurs.

If informed consent ‘must be obtained’ (as per the Medical Council) before the collection occurs, that supersedes a ‘mere’ HIPR3A analysis. The information listed in HIPR3A must still be provided unless an exception applies, but we’re now in the territory of more than mere disclosure. We’re in the land of consent.

The practical takeaway here, then, is that in health settings there may be situations where the only legally defensible way to collect new health information from an AI tool is to do so with the patient or client’s consent.
