March 14, 2026 (Long read: approximately 19 minutes)
New York Senate Bill S7263, introduced by Senator Kristen Gonzalez in April 2025, would add a new § 390-f to the General Business Law, imposing civil liability on chatbot operators that allow their systems to provide substantive professional advice in regulated fields. The bill passed the Senate Internet and Technology Committee 6–0 on February 25, 2026, and is now positioned for a full Senate floor vote. A companion bill, A6545, already exists in the Assembly.
The full text of Senate Bill S7263 is available directly from the New York State Senate at https://www.nysenate.gov/legislation/bills/2025/S7263, and the bill PDF is available at https://legislation.nysenate.gov/pdf/bills/2025/S7263.
The legislation’s goal is legitimate. Chatbots that impersonate licensed professionals and dispense harmful medical, legal, or mental health guidance pose a real risk to New Yorkers, and the State has a compelling interest in protecting consumers. But the bill as drafted contains at least four serious technical deficiencies that, if left unaddressed, could expose the wrong parties to liability, create irrational regulatory asymmetries, confuse courts trying to apply its provisions, and — perhaps most counterproductively — suppress a class of information access that actually helps ordinary people engage more meaningfully with licensed professionals.
Below, we identify each deficiency and propose a targeted fix.

What the Bill Actually Does
Senate Bill S7263 amends the General Business Law by adding § 390-f, which does three things:
- Prohibits a chatbot “proprietor” from permitting the chatbot to provide any substantive response, information, or advice that, if given by a natural person, would constitute a crime under Education Law §§ 6512–6513 (unauthorized practice of a licensed profession) or would violate the unauthorized-practice-of-law provisions of Judiciary Law Article 15.
- Provides that a proprietor cannot disclaim this liability merely by notifying users that they are interacting with a non-human system.
- Separately requires that proprietors provide “clear, conspicuous and explicit notice” to users that they are interacting with an AI chatbot, displayed in the same language and at least as large a font as any other text on the site.
The Education Law provisions referenced by the bill sweep across the licensed professions governed by thirteen articles: Article 131 (Medicine), 133 (Dentistry), 135 (Veterinary Medicine), 136 (Physical Therapy), 137 (Pharmacy), 139 (Nursing), 141 (Podiatry), 143 (Optometry), 145 (Engineering, Land Surveying and Geology), 147 (Architecture), 153 (Psychology), 154 (Social Work), and 163 (Mental Health Counseling).
Any person injured by a violation may bring a civil action for actual damages. Willful violations trigger actual damages plus attorney’s fees and costs — a fee-shifting provision that, as several commentators have noted, could invite serial litigation similar to the wave of ADA web-accessibility lawsuits that produced more than 5,000 filings in 2025 alone.
Problem No. 1: The Definition of “Proprietor” Is Broad Enough to Capture Ordinary Users
Section 390-f(1)(c) defines “proprietor” as:
“any person, business, company, organization, institution or government entity that owns, operates or deploys a chatbot system used to interact with users. Proprietors shall not include third-party developers that license their chatbot technology to a proprietor.”
The only carve-out is for third-party developers that license underlying technology to a deploying proprietor — in other words, foundation model providers like Anthropic or OpenAI. Everyone else who “owns, operates, or deploys” falls within scope.
The word “operates” is the problem. In ordinary usage, a person who opens a ChatGPT or Claude session and types a question is, in a non-technical sense, operating the system for the duration of that interaction. The bill does not define “operates,” and its failure to do so creates three distinct interpretive risks:
- Enterprise API customers. A law firm, hospital, or financial institution that calls a model provider’s API and wraps it in a custom interface — even a simple one — is almost certainly a proprietor under this definition. That is probably intentional. But the bill gives those parties no guidance on what engineering steps, contractual safeguards, or disclosure mechanisms might limit their exposure, other than mandatory notice that the bill says is insufficient. (A minimal sketch of such a wrapper appears below.)
- Custom GPT and Workspace builders. Individual professionals and small businesses who build “custom GPT” configurations or Claude Projects and share access with colleagues or clients would appear to “deploy” a chatbot within the meaning of the bill, even if they lack the legal or technical sophistication to evaluate their obligations.
- Sophisticated individual users. The definition’s reference to “any person” who “operates” a chatbot, read literally, could extend to individual subscribers who use a general-purpose chatbot for research assistance and then share their session output with others — a daily occurrence in every professional office in New York.
The exclusion of “third-party developers that license their chatbot technology to a proprietor” does not help here. That language is meant to protect upstream model providers, not downstream users. End users do not license chatbot technology to anyone; they purchase access from a provider. They are neither the upstream licensor nor the downstream deployer in the sense the bill contemplates — but the statutory text does not clearly exclude them.
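To make the enterprise-deployer scenario concrete, consider how little code it takes to fall within the definition. The Python sketch below shows the kind of thin wrapper described in the first bullet above; the endpoint, model name, and response fields are hypothetical placeholders rather than any provider’s actual API.

```python
# Minimal sketch of an enterprise deployer wrapping a model provider's API.
# The URL, payload shape, and response fields are illustrative placeholders,
# not any specific provider's actual interface.
import requests

PROVIDER_URL = "https://api.model-provider.example/v1/chat"  # hypothetical
API_KEY = "..."  # the deployer's own provider credential

def answer_client_question(question: str) -> str:
    """One HTTP call, one response: a law firm or hospital intranet handler.

    Even a wrapper this thin arguably "owns, operates or deploys a chatbot
    system used to interact with users," bringing the business within the
    bill's definition of "proprietor."
    """
    resp = requests.post(
        PROVIDER_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "general-purpose-model",  # hypothetical model name
            "messages": [{"role": "user", "content": question}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["output_text"]  # hypothetical response field
```

Nothing in this snippet filters, labels, or constrains the model’s output, yet under the bill it is the deployer, not the upstream provider, who would bear the liability.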
Recommended fix: Add an express exclusion for individuals who access a chatbot solely for personal use or research and who do not further distribute, re-deploy, or commercially offer the chatbot’s outputs to third parties. The definition could track the “service provider” approach familiar from the Digital Millennium Copyright Act — limiting proprietor status to those who actively make a chatbot system available to the public for use by others.
Problem No. 2: The Bill Protects Against Accounting Advice But Not Against Engineering or Architectural Advice — A Glaring Asymmetry
Senate Bill S7263 prohibits chatbot advice across thirteen Education Law articles but conspicuously omits Article 149, which governs public accountancy and the certified public accountant (CPA) license.
To be precise about what this means: under the bill as drafted, a chatbot could lawfully advise a New Yorker on the proper depreciation method for a capital asset, whether a transaction qualifies for like-kind exchange treatment under Section 1031 of the Internal Revenue Code, or how to recognize revenue under generally accepted accounting principles — all without triggering § 390-f liability. That same chatbot could not lawfully help the same user understand whether a proposed building renovation requires a licensed architect under the State Building Code, or whether a pressure vessel design complies with ASME standards.
This asymmetry is hard to justify on consumer-protection grounds. Tax and accounting errors can cause devastating financial harm. IRS audits, underpaid taxes, improper financial restatements, and fraudulent conveyances facilitated by accounting misjudgments produce real, quantifiable injury. Indeed, the Education Law’s protection of the CPA title reflects the Legislature’s considered judgment that public accountancy is sufficiently hazardous to require licensure. It is not obvious why protecting consumers from a chatbot impersonating a physical therapist (Article 136) or an optometrist (Article 143) is more pressing than protecting them from one impersonating a CPA.
The most likely explanation is inadvertence — a drafting gap rather than a principled distinction. But inadvertence in legislation is not self-correcting. If S7263 becomes law with Article 149 absent, the omission will be read as deliberate, and courts will enforce it accordingly. Chatbot operators responding to the liability risk will block or hedge responses in engineering and architecture but not in tax and accounting, producing precisely the asymmetric access to AI assistance that a coherent consumer-protection statute should avoid.
Recommended fix: Add Article 149 to the list of covered professions. If the Legislature has a policy reason to treat public accountancy differently — for instance, because the IRS separately regulates certain tax advice through Circular 230, or because tax analysis is considered a form of protected speech — that carve-out should be made explicit and justified, not left as a silent gap.
Problem No. 3: Section 390-f(2)(b) Contradicts Section 390-f(4) — A Mandatory Futility Loop
Read in isolation, § 390-f(4) and § 390-f(2)(b) each make sense. Together, they create a structural anomaly:
Section 390-f(2)(b): “A proprietor may not waive or disclaim this liability merely by notifying consumers that they are interacting with a non-human chatbot system.”
Section 390-f(4): “Proprietors utilizing chatbots shall provide clear, conspicuous and explicit notice to users that they are interacting with an artificial intelligence chatbot program.”
The bill thus simultaneously commands proprietors to give users notice that they are interacting with an AI system and commands that the act of giving that notice does not reduce their legal exposure. Proprietors are required to perform a disclosure that the statute itself declares insufficient as a defense.
This is not technically a logical contradiction — it is possible to mandate a disclosure while also holding that the disclosure alone does not defeat liability. But it is a serious policy and drafting problem for three reasons:
- It provides no compliance pathway. The bill tells proprietors what will not protect them (notice alone), but nowhere identifies what affirmative steps would limit their exposure. A statute that requires compliance but provides no path to compliance effectively creates strict liability — which may or may not be the Legislature’s intent, but which the bill does not say explicitly.
- It creates perverse incentives. If the mandatory disclosure affords no protection, sophisticated operators will respond by suppressing entire categories of AI output rather than investing in accuracy or user guidance. The result is that users lose access to AI assistance not because of demonstrated harm but because operators cannot afford the residual liability that no disclosure can eliminate.
- It renders the disclosure requirement incoherent in retrospect. If the purpose of § 390-f(4) were to inform users so that they could make more careful decisions about relying on AI output, that purpose is undermined by § 390-f(2)(b)’s insistence that the disclosure makes no legal difference. Either the Legislature believes that disclosure changes the user’s risk calculus (in which case it should matter legally), or it does not (in which case § 390-f(4) is a bureaucratic formality disconnected from the bill’s consumer-protection rationale).
Recommended fix: Add a tiered liability provision that distinguishes between operators who comply with § 390-f(4)’s disclosure requirement and those who do not. Disclosure-compliant operators should receive at least a partial affirmative defense — or, at minimum, a reduction in damages or a heightened standard for willfulness findings. The Legislature should also specify, separately, what additional safeguards (content filters, professional referral prompts, accuracy auditing) might qualify an operator for a safe harbor. Without that structure, §§ 390-f(2)(b) and 390-f(4) work at cross-purposes.
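To give a sense of what such additional safeguards might look like in practice, here is a minimal sketch of two of them: a crude content filter that flags queries touching regulated professions, and a professional referral prompt. The keyword lists and referral wording are our own illustrative assumptions, not anything the bill prescribes; a production system would presumably use a trained classifier rather than keyword matching.

```python
# Sketch of two safe-harbor candidates named above: a content filter that
# flags queries touching regulated professions, and a referral prompt
# appended to responses in those domains. Keyword lists and wording are
# illustrative assumptions, not statutory requirements.
from typing import Optional

REGULATED_DOMAIN_KEYWORDS = {
    "medicine": ["diagnosis", "prescription", "dosage", "symptoms"],
    "law": ["eviction", "lawsuit", "custody", "contract dispute"],
    "engineering": ["load-bearing", "structural", "pressure vessel"],
}

REFERRAL_PROMPT = (
    "This topic falls within a licensed profession in New York. "
    "Consider consulting a licensed professional; the NYS Office of the "
    "Professions maintains a public license-verification search."
)

def classify_domain(user_query: str) -> Optional[str]:
    """Return the regulated domain a query touches, if any (crude heuristic)."""
    q = user_query.lower()
    for domain, keywords in REGULATED_DOMAIN_KEYWORDS.items():
        if any(k in q for k in keywords):
            return domain
    return None

def apply_safeguards(user_query: str, model_response: str) -> str:
    """Append a referral prompt when the query touches a regulated domain."""
    if classify_domain(user_query) is not None:
        return f"{model_response}\n\n{REFERRAL_PROMPT}"
    return model_response
```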
Problem No. 4: The Bill Lacks an Informed-User Safe Harbor — and in Doing So, Harms the People It Aims to Protect
The bill’s fundamental premise is that AI chatbot advice is dangerous and that the remedy is to discourage chatbots from providing it. We believe this analysis is incomplete, and that the bill as drafted would produce concrete harm to the very New Yorkers it is designed to protect.
Consider who actually uses AI chatbots for professional questions. It is generally not the client with a trusted attorney, a personal physician, or an accountant on retainer. It is the tenant who cannot afford a lawyer to interpret an eviction notice. It is the small business owner who does not know whether a state construction permit requires an architect. It is the uninsured patient who wants to understand a diagnosis before a specialist appointment that is three weeks away. For those users, the choice the bill creates is not between AI advice and licensed professional advice. It is between AI advice and no advice at all.
Suppressing AI guidance for the informed consumer does not protect that consumer — it simply leaves them less informed. It also disadvantages the licensed professionals those consumers eventually consult. An attorney, doctor, or engineer who can begin an engagement with a client who already understands the basic framework of the relevant issues is a more effective professional. The client who has researched their matter — even imperfectly — asks better questions, provides more relevant facts, and can evaluate professional advice more critically.
The appropriate legislative response to the risk of harmful AI professional advice is not suppression but calibrated disclosure — a requirement that chatbots surface clear, affirmative guidance to users before delivering substantive information in a regulated professional domain. Rather than prohibiting the chatbot from answering, the statute should require the chatbot to answer with context. That context could take the form of a mandatory pre-response disclosure, appearing before any substantive professional information is delivered, that:
- Clearly identifies the information that follows as AI-generated and not the advice of a licensed professional;
- Explains, in plain language relevant to the specific professional domain, that the information may be inaccurate or incomplete;
- Encourages the user to consult a licensed professional before acting on any information received; and
- Provides, where feasible, publicly available resources for finding licensed professionals in the relevant field.
This approach — a mandatory, domain-specific, pre-response safe-harbor disclosure — would accomplish more for consumer protection than an outright prohibition. It informs users of the risk at the moment of maximum relevance (before they receive and potentially rely on the information, not after), while preserving their ability to access general guidance that may help them make better use of professional time and resources.
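In implementation terms, the gate this proposal contemplates is simple: classify the query, and if it touches a covered domain, emit the disclosure before the model’s answer rather than after it. The sketch below reuses the hypothetical classify_domain() helper from the earlier safeguards sketch; the disclosure texts are illustrative and are not statutory language.

```python
# Sketch of the pre-response disclosure gate described above: the user sees
# a domain-specific disclosure *before* any substantive content. Disclosure
# wording is illustrative; classify_domain() is the hypothetical router from
# the earlier safeguards sketch.
from typing import Callable

DISCLOSURES = {
    "medicine": (
        "The information below is generated by an artificial intelligence "
        "system and is not the advice of a licensed physician. It may be "
        "inaccurate or incomplete. Consult a licensed medical professional "
        "before acting on it."
    ),
    "law": (
        "The information below is generated by an artificial intelligence "
        "system and is not the advice of a licensed attorney. It may be "
        "inaccurate or incomplete. Consult a licensed attorney before "
        "acting on it."
    ),
}

def respond_with_disclosure(user_query: str, generate: Callable[[str], str]) -> str:
    """Deliver the disclosure ahead of the substantive answer.

    `generate` is the operator's call into the underlying model. Placing the
    disclosure first means the warning arrives at the moment of maximum
    relevance: before the user reads, and potentially relies on, the answer.
    """
    domain = classify_domain(user_query)
    answer = generate(user_query)
    if domain in DISCLOSURES:
        return f"{DISCLOSURES[domain]}\n\n{answer}"
    # Domains without a canned disclosure fall through unmodified here; a
    # complete implementation would cover every domain in subdivision two.
    return answer
```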
Critically, operators who comply with such a disclosure requirement should receive a liability safe harbor — at least for non-willful violations. This would align the incentive structure with the Legislature’s actual goal: more informed AI users, not less informed ones.
Recommended fix: Add a new subdivision to § 390-f providing that a proprietor who, prior to any substantive response in a covered professional domain, displays a clear and prominent disclosure meeting statutory minimum requirements shall not be liable for actual damages, and shall be liable for willful violations only where the chatbot expressly represents that it is a licensed professional or actively conceals the AI nature of its response. Pair this with a rulemaking delegation authorizing the Office of the Attorney General to adopt domain-specific disclosure standards for medicine, law, engineering, and other covered fields.
Appendix: Proposed Statutory Amendment Language
The following draft amendments to proposed § 390-f are offered as a starting point for legislative revision. Consistent with New York bill-drafting conventions, matter to be omitted is shown in brackets; new matter, which would be underscored in the official bill print, appears here as unbracketed inserted text. These proposals are intended to be illustrative and should be reviewed by legislative counsel before introduction.
Amendment No. 1 — Narrowing the Definition of “Proprietor”
Proposed revision to § 390-f(1)(c):
(c) “Proprietor” shall mean any person, business, company, organization, institution or government entity that [owns, operates or deploys a chatbot system used to interact with users] makes a chatbot system available to the public, or to a defined class of users, for the purpose of providing information or services through such system. Proprietors shall not include (i) third-party developers that license their chatbot technology to a proprietor[.]; or (ii) any individual who accesses a chatbot solely for personal, non-commercial use and does not further offer, distribute, re-deploy, or commercially exploit the outputs of such chatbot to or for the benefit of third parties.
Drafting note: The phrase “makes a chatbot system available to the public” is borrowed from the safe-harbor framework of 17 U.S.C. § 512(c) and similar provider-liability statutes, and limits proprietor status to parties who actively host or deploy a system for others’ use. The clause (ii) exclusion protects individual end users while leaving enterprise API customers, chatbot-as-a-service platforms, and institutional deployers within scope.
Amendment No. 2 — Adding Public Accountancy (Article 149) to the Covered Professions
Proposed revision to § 390-f(2)(a)(i) — adding Article 149 to the enumerated list:
(i) would constitute a crime under section sixty-five hundred twelve or sixty-five hundred thirteen of the education law in relation to the professions whose licensure is governed under articles one hundred thirty-one, one hundred thirty-three, one hundred thirty-five, one hundred thirty-six, one hundred thirty-seven, one hundred thirty-nine, one hundred forty-one, one hundred forty-three, one hundred forty-five, one hundred forty-seven, one hundred forty-nine, one hundred fifty-three, one hundred fifty-four, and one hundred sixty-three of the education law; or
Drafting note: The only change is the insertion of “one hundred forty-nine” between “one hundred forty-seven” (Architecture) and “one hundred fifty-three” (Psychology), adding the public accountancy profession governed by Article 149. If the Legislature determines that federal regulatory preemption through IRS Circular 230 or another mechanism counsels a different approach for tax and accounting advice specifically, that carve-out should be added as an express proviso to this subdivision rather than addressed by omission.
Amendment No. 3 — Resolving the Contradiction Between §§ 390-f(2)(b) and 390-f(4)
Proposed revision to § 390-f(2)(b) — replacing the blanket non-waiver with a tiered standard:
(b) [A proprietor may not waive or disclaim this liability merely by notifying consumers that they are interacting with a non-human chatbot system.] Compliance with the disclosure requirements set forth in subdivision four of this section shall not, standing alone, constitute a complete defense to an action brought under this section; provided, however, that in any action for actual damages, a court shall consider a proprietor’s good-faith compliance with subdivision four and with any rules promulgated thereunder as a factor in determining the amount of damages, and no proprietor that has complied with subdivision four shall be subject to fee-shifting under subdivision three unless the violation is found to be willful. A finding of willfulness shall not be made solely on the basis of the content of a chatbot’s substantive response; it shall require a finding that the proprietor took affirmative steps to cause the chatbot to represent itself as a licensed professional, to conceal the AI-generated nature of its response, or to circumvent controls designed to comply with this section.
Drafting note: This revision preserves the Legislature’s intent that a bare disclosure disclaimer is not a complete defense, while creating a coherent compliance pathway: operators that follow the § 390-f(4) disclosure requirement receive a partial shield (no fee-shifting without willfulness, and damages subject to judicial discretion). The willfulness definition prevents strike-suit litigation based solely on a chatbot’s substantive output.
Amendment No. 4 — Adding an Informed-User Safe Harbor with Mandatory Pre-Response Disclosure
Proposed new subdivision 5, to be added at the end of § 390-f:
5. Safe harbor for domain-specific pre-response disclosure. (a) A proprietor shall not be liable for actual or statutory damages under this section, and shall not be subject to fee-shifting under subdivision three of this section, if, prior to delivering any substantive response that would otherwise be prohibited under subdivision two of this section, the chatbot displays to the user a domain-specific pre-response disclosure that meets the minimum standards set forth in paragraph (b) of this subdivision or in rules promulgated by the attorney general pursuant to paragraph (c) of this subdivision.
(b) A pre-response disclosure shall at minimum: (i) state, in plain language and in a font no smaller than the largest text displayed elsewhere on the same interface, that the information to follow is generated by an artificial intelligence system and does not constitute the advice of a licensed professional; (ii) identify, in general terms, the licensed profession or professions whose regulated subject matter is addressed by the forthcoming response; (iii) advise the user to consult a licensed professional before acting on the information; and (iv) where reasonably practicable, provide a reference to at least one publicly available resource for locating licensed professionals in the relevant field, such as the New York State Office of the Professions licensee search or the attorney search maintained by the Office of Court Administration.
(c) The attorney general is authorized to promulgate rules establishing additional or alternative standards for pre-response disclosures applicable to specific professional domains enumerated in subdivision two of this section. Such rules may prescribe the form, content, timing, and manner of disclosure and may vary by professional domain.
(d) The safe harbor established by this subdivision shall not apply where: (i) the chatbot expressly represents that it is, or is operated by, a licensed professional in the relevant field; (ii) the chatbot provides a fabricated license number, credential, or other false indicia of professional licensure; or (iii) the proprietor has taken affirmative steps to suppress or circumvent the pre-response disclosure required by this subdivision.
Drafting note: Subdivision 5 establishes the safe harbor as a complete defense to actual damages and fee-shifting (though not as a defense to injunctive relief or nominal damages), conditioned on the proprietor’s affirmative delivery of a pre-response disclosure before substantive professional content is shown. The carve-outs in paragraph (d) preserve full liability for the specific misrepresentation conduct that the bill’s sponsor has cited as the core target of the legislation: chatbots that impersonate licensed professionals, fabricate credentials, or actively deceive users.
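For operators, proposed paragraph (b) reads almost like a checklist, and it can be encoded as one. The sketch below is our own illustration of such a check; the data structure, field names, and string heuristics are assumptions, not a substitute for the attorney general’s eventual rules.

```python
# Sketch of a compliance check against proposed § 390-f(5)(b)(i)-(iv).
# The dataclass, field names, and string heuristics are our own illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PreResponseDisclosure:
    text: str                          # plain-language disclosure shown to the user
    profession: str                    # regulated profession identified, per (b)(ii)
    advises_consultation: bool         # per (b)(iii)
    resource_reference: Optional[str]  # e.g. Office of the Professions search, (b)(iv)
    font_scale: float                  # 1.0 = largest text elsewhere on the interface

def meets_minimum_standards(d: PreResponseDisclosure) -> bool:
    """Return True only if all four paragraph (b) requirements are satisfied."""
    text = d.text.lower()
    return (
        "artificial intelligence" in text   # (b)(i): identified as AI-generated
        and "licensed" in text              # (b)(i): not professional advice
        and d.font_scale >= 1.0             # (b)(i): font-size requirement
        and bool(d.profession)              # (b)(ii): domain identified
        and d.advises_consultation          # (b)(iii): consultation advised
        # (b)(iv) applies only "where reasonably practicable," so a missing
        # resource_reference is not treated as disqualifying here.
    )
```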
Conclusion: A More Effective Path Forward
Burrell Law, P.C. supports the Legislature’s effort to establish a legal framework for AI chatbot liability in New York. The state’s residents deserve protection from AI systems that impersonate licensed professionals, fabricate credentials, and dispense dangerous or fraudulent guidance. But the bill’s current draft achieves less than it promises and creates significant unintended risks along the way.
The four deficiencies identified above — the overbroad proprietor definition, the Article 149 omission, the structural tension between §§ 390-f(2)(b) and 390-f(4), and the absence of a meaningful informed-user safe harbor — are all fixable within the existing legislative framework. None requires a wholesale redraft. Each can be addressed with targeted amendments that would make S7263 a more precise, more enforceable, and more equitable statute.
We encourage the Legislature to take these concerns seriously before the bill advances to a floor vote, and we welcome the opportunity to assist interested stakeholders, industry participants, or legislators in developing proposed amendments.
About Burrell Law, P.C.
Burrell Law, P.C. is a business and tax law firm with offices in New York City and Washington, D.C., advising clients on corporate law, securities, cryptocurrency regulation, emerging technology and tax. This post is for informational purposes only and does not constitute legal advice. Readers with specific questions should consult qualified legal counsel. For more information, please visit https://burrell-law.com or email info@burrell-law.com.
Primary Sources Cited
New York Senate Bill S7263 (2025–2026 Session): https://www.nysenate.gov/legislation/bills/2025/S7263
Full bill text (PDF): https://legislation.nysenate.gov/pdf/bills/2025/S7263
New York Senate Committee Press Release (Feb. 25, 2026): https://www.nysenate.gov/newsroom/press-releases/2026/kristen-gonzalez/ai-chatbot-ban-minors-passes-internet-technology
New York Education Law Title 8 — The Professions (licensed profession article index): https://www.op.nysed.gov/title8/education-law
Education Law Article 149 — Public Accountancy: https://www.op.nysed.gov/professions/certified-public-accountants/laws-rules-regulations/article-149