Teams building AI products in India often expect novelty objections, but the first roadblock usually arrives under Section 3(k): an objection that the claims are a “mathematical method” or a “computer programme per se”. The problem is sharper for ML pipelines because the inventive spark often lies in model training, feature engineering, or inference logic that reads like algorithms on paper. Recent Delhi High Court decisions have, however, clarified that software implementation is not fatal where a technical effect or technical contribution is shown, and the Patent Office’s CRI Guidelines remain the touchstone for framing and examining such inventions.
The legal landscape for AI/ML patents in India
Section 3(k) excludes mathematical methods, business methods, computer programmes per se, and algorithms. Examiners apply the Revised Guidelines for Examination of Computer Related Inventions, 2017, which instruct a substance-over-form approach, asking whether the invention produces a technical effect or technical contribution, not merely an automation of mental steps.
Courts have reinforced this framework. In Ferid Allani v. Union of India (Delhi High Court, 2019), the Court held that a computer-implemented invention is not excluded if it demonstrates a technical effect or contribution. The case was remanded and later resulted in grant by the IPAB, anchoring the “technical effect” pathway that applicants now routinely invoke.
In Microsoft Technology Licensing v. Assistant Controller (Delhi High Court, 2023 and 2024 orders referenced in later appeals), the Court reiterated that the Controller must examine the substance for technical effect, rather than reject merely because claims mention software or instructions. Commentators have read these rulings as tightening the expectation that applicants evidence the technical contribution in the specification.
Fintech decisions add colour. In Comviva Technologies v. Assistant Controller (Delhi High Court, 12 November 2024), the refusal under Section 3(k) was set aside because the invention delivered a technical solution to a protocol-level problem in electronic payments, not a business rule tweak. Although payments are not “AI” per se, the ratio is directly relevant to ML-driven authentication and fraud-control systems, where the contribution must be framed as a system-level improvement.
Orders in the BlackBerry matters, as discussed in 2025 commentary, have further clarified that “technical contribution” must be real and evidenced, not merely asserted. Practitioners now emphasise mapping claim features to measurable computing outcomes.
Where AI/ML applications usually stumble
The “mathematical method” label
When claims speak in model-centric terms, for example loss functions, gradient rules, or matrix operations, the contribution can be characterised as a mathematical method. The 2017 CRI Guidelines caution that mere expression of calculations or equations is excluded, whereas an implementation that changes machine behaviour can be patentable. A draft 2025 update, circulated by the IPO, continues to illustrate technical effect and contribution with examples linked to case law.
“Computer programme per se” without technical effect
A pipeline that “receives data, trains a model, generates a score” on standard hardware is vulnerable. Examiners look for a concrete technical result, for example reduced latency through an edge inference architecture, improved cache behaviour, protocol-level message reduction, or more robust cryptographic handling inside a secure element. Courts have set aside refusals where such effects were demonstrated and tied to claim elements.
Enablement and sufficiency under Section 10
Even where 3(k) is addressed, specifications often under-disclose training regimes, data constraints, or deployment details, which triggers enablement objections. For ML, “best method” means disclosing the training, inference, or scheduling details sufficient for a person skilled in the art to practise the invention, not merely naming a model family.
Inventive step over standards and papers
Most relevant prior art for AI/ML sits in standards, arXiv papers, and open-source repositories, not only in patents. Examiners deploy these as obviousness grounds. Applicants who pre-chart against such literature and isolate the system-level delta fare better.
Drafting AI/ML claims for India, a practical template
Start with a technical problem and measurable effect
State the computing bottleneck the invention overcomes and quantify outcomes. For example, “reduces inference latency by eliminating round-trip calls via an on-device quantised model with adaptive precision scheduling” or “cuts false positives via a secure, on-edge anomaly filter that changes packet framing and reduces retransmissions”. This language aligns with the technical effect lens used by the CRI Guidelines and the courts.
Structure claims in layers
System or apparatus claim reciting concrete elements, for example an edge accelerator, DMA pathway, secure enclave interface, message-queue scheduler, or a model-serving runtime that alters memory and I/O behaviour.
Method claim that changes system operation, for example quantisation switching tied to bus contention, or dropout scheduling constrained by thermal envelopes.
Computer-readable medium claim as support, not the main event, because substance trumps labels under 3(k).
Evidence in the specification
Include benchmarks, profiling traces, and architectural diagrams that link claim features to results. The stronger Microsoft and Comviva outcomes turned on courts finding a tangible technical contribution on the record, not just drafting style.
Avoid red flags
Claims centred only on prediction accuracy or business outcomes, for example higher approval or conversion rates, read as abstract goals. Convert them into machine metrics, latency budgets, memory footprints, packet loss, or fault tolerance.
Refrain from “hardware dressing” like “a processor, a memory” without saying how configuration changes machine behaviour.
Prosecution strategy, step by step
Pre-filing: build a record you can rely on
Run controlled tests that show the effect of the claimed feature. Keep artefacts you can later file by way of voluntary submissions or affidavits if required. When possible, record protocol traces or performance counters rather than only application-level KPIs.
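As an illustration only, a minimal harness of the kind described above might record latency percentiles for a claimed feature against a baseline. The two inference functions here are hypothetical placeholders for an applicant's own pipelines, not drawn from any cited case; the point is the artefact, a repeatable measurement that can later be filed on the record.

```python
# Illustrative pre-filing benchmark sketch. The two inference functions are
# hypothetical stand-ins; substitute the real baseline and claimed pipelines.
import time
import statistics


def baseline_inference(payload):
    # Placeholder for the unoptimised path, e.g. full-precision inference.
    return sum(v * v for v in payload)


def optimised_inference(payload):
    # Placeholder for the claimed feature, e.g. quantised on-device inference.
    return sum(v * v for v in payload)


def measure(fn, payload, runs=200):
    """Return p50/p95 wall-clock latency in milliseconds over repeated runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(payload)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * len(samples)) - 1],
    }


if __name__ == "__main__":
    payload = list(range(10_000))
    for name, fn in [("baseline", baseline_inference),
                     ("optimised", optimised_inference)]:
        stats = measure(fn, payload)
        print(f"{name}: p50={stats['p50_ms']:.3f} ms  p95={stats['p95_ms']:.3f} ms")
```

Percentile latencies, rather than single-run averages, are the kind of machine-level metric that maps cleanly onto a technical-effect argument; protocol traces and hardware performance counters serve the same evidentiary role at other layers.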
Anticipate 3(k) plus inventive step
Most AI/ML FERs raise both. Prepare differential charts against the closest standard or paper, then pivot to technical effect. Use Ferid Allani to establish the legal test and Microsoft to insist on a substance-based analysis. Where relevant to security, payments, or comms, leverage Comviva to underline protocol-level contribution.
Amend with a prosecution map
Plan fall-back claims that progressively crystallise the technical effect, for example binding an accuracy gain to a scheduler that reduces kernel context switches, or to an enclave-mediated key release that changes handshake retries. Many remands and favourable orders hinge on whether the record shows a real problem-solution chain.
Use hearings to teach the machine story
A consistent hearing brief walks the Controller from problem to claim to measurement. Avoid abstract adjectives; show before-and-after traces. Recent commentary on Microsoft underscores that conclusory “has technical effect” assertions will not do.
Frequently asked within the flow
Are pure ML models patentable in India?
If the claim is to a model as a mathematical construct, it risks exclusion. If the claim ties the model to a specific system architecture that yields a technical effect, protection is possible. Ferid Allani and Microsoft frame the test, and your specification must prove the effect.
Do datasets need to be disclosed?
Disclose what is necessary to enable the skilled person to practise the invention, for example distributional characteristics, feature pipelines, and training constraints. Exact rows may be proprietary, but the best-method obligation remains part of Section 10 sufficiency.
Will referencing open-source frameworks hurt?
No, but it can shift focus back to the technical delta. If you rely on PyTorch or ONNX, spell out the scheduler, memory, or deployment differences that deliver the claimed effect. Prior art charts should already defuse any “mere use of a library” objection.
Should we file design or copyright alongside?
For UI/UX or model visualisations, consider design registration. For code, copyright exists upon fixation. These do not replace patents but can complement protection where Section 3(k) keeps some elements outside patent scope.
Comparative note, briefly
Unlike the USPTO’s evolving subject-matter eligibility tests or the EPO’s two-hurdle COMVIK approach, India’s consistent core is Section 3(k) plus the CRI Guidelines’ technical effect lens. The Delhi High Court’s recent line of cases has moved practice closer to an effects-based analysis, but applicants still carry the evidentiary burden.
A checklist, AI/ML drafting and prosecution in India
Define the technical problem early, in computing terms such as memory bandwidth, network jitter, cache contention, secure key release, or thermal throttling.
State measurable outcomes and keep bench artefacts ready, including traces and counters, not only accuracy.
Lead with system or method claims, keep CRM claims supportive. Avoid claim sets that read like maths notes.
Pre-chart prior art from standards and papers to isolate the system-level delta before filing.
Use jurisprudence precisely: Ferid Allani for the technical-effect gateway, Microsoft for substance over labels, Comviva where protocol-level security or message handling is central.
Amend smartly with fall-backs tied to the measured effect, and use hearings to walk the Controller through the chain.
Mind sufficiency: disclose training or deployment specifics that make the result repeatable under Section 10.
The path to protection, distilled
AI and ML inventions are not off-limits in India, but they demand a disciplined technical narrative. If your specification shows a concrete machine-level problem, claims a system-changing solution, and evidences a measurable effect, Section 3(k) becomes a hurdle that can be cleared. The courts have opened the door for well-framed computer-implemented inventions. The way forward is to put hard engineering detail into the patent, map it to the CRI framework, and prosecute with data, not adjectives.