
TL;DR: Three Indian business schools (Jaipuria, IIM Sirmaur, Stride) claim to be “India’s first AI-native.” None of those claims contains a test. Apply the term honestly and the answer most B-schools are avoiding becomes clear: AI-native institutions are not built by changing what they teach. They are built by changing how faculty are paid. The operational test is short: the faculty builds the tools, and the students generate the data. Everything else is marketing.
Last month I sat with an MBA applicant who had narrowed her search to four schools. She had spreadsheets. Fees. Placements. NIRF ranks. A column titled “AI integration” that was checkmarked across all four. When I asked her how she had filled that column, she said she had read each school’s website. Each one said “AI-driven curriculum.” Each one had a partnership announcement with someone. Each one had a course called something like Generative AI for Managers. The column was identical across four very different institutions.
She had not picked a school. She had picked a press release.
This is what happens when a piece of language stops doing work. The fix is not to abandon “AI-native.” The fix is to walk back to where the phrase came from and apply the original meaning to higher education.
What “AI-Native” Meant Before B-Schools Borrowed It
“AI-native” descends from “cloud-native,” a term the Cloud Native Computing Foundation popularized around 2015 to mark software built for the cloud, not ported to it. A cloud-native app assumed elastic infrastructure, containers, and continuous deployment from day one. An app moved to the cloud did none of that. Both ran on AWS. Only one was cloud-native. The word did real work because it identified an architectural commitment.
The current working definitions of “AI-native” sit in venture capital.
a16z’s essay Greenfield Strategy: AI-Native Startup Bingo argues that software is becoming labor. The product is not a tool a human picks up. It is a unit of work the model delivers. Mercury, Stripe, Cursor, ElevenLabs, Decagon. The same essay is blunt about the incumbents (Zendesk, Workday, NetSuite): they “have hostages, not customers,” locked in by switching costs. The AI-native challengers acquire customers fresh and price on usage or outcome, not seats. The model layer absorbs functions that used to need an engineer.
YC’s Summer 2026 Requests for Startups extends the logic. Diana Hu writes that “the best AI-native companies we’re seeing have figured out something most haven’t: they’ve made their entire company queryable.” Treat AI agents as the operating system of the firm, not as a feature bolted on top of older software. Headcount per unit of revenue stays small because the model layer keeps absorbing what used to take staff.
When business schools borrowed the word, they borrowed a definition with that much architecture in it. They mostly did not honour the architecture.
The Translation Indian B-Schools Are Avoiding
If AI-native means AI absorbs the work staff used to do, the educational equivalent is direct. Explanation. Drilling. Feedback. Rubric-based grading. Office-hour Q&A on textbook concepts. The repetitive bulk of teaching. All of it can now be absorbed by a platform layer. What remains is the part the platform cannot do. Judgment when a case is ambiguous. A student who looks confident and is not. The moments where a credentialed human in the room actually changes what the student does next.
Sangeet Paul Choudary names this move in Reshuffle. AI does not substitute for tasks. It unbundles a job: the rote half goes to the model, and the remainder gets rebundled into a different shape. His example is the trucker. The trucker once chose the route, and that judgment was where the premium came from. Now the platform picks the route and the driver executes. The trucker has not lost the job. The trucker has lost the part of the job that paid premium.
The same unbundling is underway for Indian B-school faculty. The lecture. The office-hour explainer. The rubric-graded assignment. Those are the routes the platform now picks. What is left is the chamber-consultant role: high-judgment, low-frequency, reserved for the moments the platform cannot resolve. Choudary’s line lands here too: “companies don’t need an AI strategy. They need a strategy for the conditions that AI creates.” B-schools do not need an AI curriculum. They need a strategy for what the faculty job becomes once AI absorbs the rote half.
The Real Problem Is a Contract, Not a Curriculum
Indian B-schools cannot become AI-native by changing what they teach. They have to change how faculty are paid, evaluated, and tenured.
Faculty tenure in Indian B-schools is structured around metrics that select for the activity AI absorbs most easily. Publication count. Course load. Hours taught. Student feedback on lecture delivery. These metrics are stable when most of teaching is human labour. They break the moment a platform absorbs sixty per cent of the explaining and drilling. A faculty member optimising for current tenure will publish a paper about AI and add an AI module to their existing course. A faculty member optimising for an AI-native institution will ship a product students actually use, reduce their lecture hours, and spend the freed time on chamber consultation and dashboard review. The current tenure committee does not know how to value any of that.
This is why the AI-native conversation in India has stayed at the curriculum layer. Curriculum is the part the existing contract can absorb. Faculty redesign is the part it cannot.
BITSoM Mumbai has come closest. They made students build and ship an AI artefact to graduate. That policy is real, and I respect it. But it is a requirement on students, not a redesign of what faculty are paid to do. The harder move is on the faculty side, and nobody has attempted it at scale. The unit of value in a faculty contract would have to shift, from “lectures delivered plus papers published” to “chamber consultation hours plus a working artefact deployed to students.” Nobody in Indian higher education has written that contract yet. The institution that does will be the first AI-native B-school in the architectural sense, regardless of which one wins the PR race.
An AI-native B-school has one defining characteristic: the faculty builds the tools and the students generate the data. Everything else is marketing.
The Two-Halves Test
Both halves of the test have to be true at once. Each half is independently observable, dated, and falsifiable from outside the institution if you ask the right questions.
The first half (faculty builds tools) is the harder one in India, because tenure economics actively select for faculty who write papers about AI rather than faculty who ship code. The second half (students generate data) is the harder one to fake, because the data either exists in a dashboard you can show or it does not. A school that passes both halves has, somewhere in its structure, redesigned a faculty contract. A school that fails either half has not.
What the Test Reveals at Jaipuria
I will apply it to one institution I work at, because it is the one I have the most data on. Jaipuria Institute of Management currently runs a faculty-built AI platform called Rehearsal across four campuses. As of May 2026, 2,658 students have completed 4,919 AI-powered interview rehearsals, plus 634 AI CV-review sessions and 499 AI aptitude sessions. The campus split is concrete: Noida 706, Lucknow 563, Jaipur 387, Indore 72. The peer-reviewed research basis is published in SAGE Business and Professional Communication Quarterly (Kakkar, Sharma, and Agrawal 2025). I cite these numbers because they can only be reversed by deleting completed sessions; they grow weekly; and they map to specific student names in a backend, not to a marketing claim.
This is the evidence the test asks for. Almost no Indian B-school can produce it, because almost none has faculty who write production code, and none has restructured a faculty contract around AI deployment.
I know of one other Indian B-school faculty member who has built and deployed AI tooling his students actively use. Anand Nandkumar at the Indian School of Business has shipped a GPT-based tutor and VR-based decision exercises into his MBA classroom. He is the only other person I can name who meets the first half of the test cleanly. Two faculty members across India’s 4,000+ business schools. That smallness is the contract argument.
BITSoM’s policy is real but is a requirement on students, not on faculty. IIM Sirmaur and Stride Business School are making programme-design claims, which is again different from deployment evidence. None passes the two-halves test.
The 5-Minute Checklist
For an applicant comparing schools, the test produces five questions any admissions interview can be steered toward.
- Name one tool a faculty member at this institution built that students are using right now. Not licensed. Built. If the answer is “we have a ChatGPT subscription for everyone,” the answer is no.
- What is the dashboard URL where I can see usage numbers from your AI tools? Not the prospectus. Not a slide. A live dashboard, even if access is restricted to applicants who ask.
- Which faculty member’s name is associated with which AI artefact? Faculty co-authorship of papers about AI is not the same as faculty production of AI. The relevant question is which person you can call if the tool breaks.
- What percentage of your incoming students are using these tools in regular academic workflows by end of trimester one? Adoption inside the first ninety days is the only adoption that matters; anything later is post-graduation noise.
- Has any faculty member’s contract or workload been formally restructured around AI deployment? This is the question that flushes out marketing from architecture. If no contract has changed, the institution has not yet absorbed what AI-native means.
If a school cannot answer three of these five in a single interview, the AI-native label is decorative. That does not mean the school is bad. It means the AI-native attribute is doing no work in the decision, and the applicant should pick on other margins (placements, location, fees, fit) where the data is real.
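The checklist above is mechanical enough to write down as a decision rule. A minimal sketch, assuming hypothetical field names of my own (this is an illustration of the three-of-five threshold, not a real Gradeless or Rehearsal API):

```python
# Hypothetical sketch of the five-question checklist as a scoring rule.
# Field names and the threshold are illustrative, not an official instrument.

from dataclasses import dataclass

@dataclass
class SchoolAnswers:
    faculty_built_tool: bool      # Q1: tool built (not licensed) by faculty, in student use now
    live_dashboard: bool          # Q2: a queryable dashboard with real usage numbers
    named_faculty_owner: bool     # Q3: a specific person you can call if the tool breaks
    adoption_in_90_days: bool     # Q4: regular student use by end of trimester one
    contract_restructured: bool   # Q5: a faculty contract formally changed around AI deployment

def label_does_work(a: SchoolAnswers, threshold: int = 3) -> bool:
    """The AI-native label is doing work in a decision only if the school
    can answer at least `threshold` of the five questions in one interview."""
    score = sum([a.faculty_built_tool, a.live_dashboard, a.named_faculty_owner,
                 a.adoption_in_90_days, a.contract_restructured])
    return score >= threshold

# A school with a licensed chatbot and a prospectus slide scores zero:
marketing_only = SchoolAnswers(False, False, False, False, False)
print(label_does_work(marketing_only))  # → False
```

The point of the sketch is the shape of the rule, not the arithmetic: every question is a yes/no an applicant can verify in a single interview, and a "no" on all five is not a verdict on the school, only on whether the label should influence the choice.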
Why Rehearsal Is the Substrate, Not the Destination
Jaipuria is not yet AI-native the way Decagon is. The chamber-consultant faculty model is partial. The LMS is not yet the primary student surface for every course. Per-outcome pricing, the educational analog of an AI-native startup charging per resolution rather than per seat, does not exist in Indian higher education at all. The fee structure is set by regulation. The unit is still a two-year degree.
What Jaipuria has done is ship the first artefact that makes the transition operationally visible and informally restructure one faculty role around it. Rehearsal is faculty-built and student-used, with the dashboard numbers above. One faculty role is now structured around shipping and maintaining the platform, not around lecture hours or publication count. The product is not the destination. It is the substrate that makes the institutional change testable from outside. The contract change is what would let the substrate persist beyond one faculty member, and that part is still to do.
Why I’m the One Saying This
I sit at a strange vantage. I teach Management Development Programs at XLRI Jamshedpur, IIM Ranchi, and IIM Rohtak, to managers from HDFC Bank, Infosys, Max Healthcare and 60+ other organizations. That cohort wants to know whether the institutions they are sending teams to actually understand AI or just rebrand around it. Their question is the credential question, which I wrote about in the IIM GenAI courses series earlier this year.
The other audience, the applicant in front of me with the spreadsheet, is asking the same question one step earlier in life. She is choosing where to spend two years and ten lakh rupees. The test in this essay is the one I would give her.
I work on this problem at Gradeless, the venture that builds Rehearsal. Rehearsal is the platform Jaipuria deploys; Jaipuria AI Labs is the unit inside the institution that ships it. My personal answer to the test is that I built the thing, the students use the thing, and the dashboard is queryable. My institutional answer is more honest. My own faculty contract is informally restructured around the product. The formal contract architecture that would make this reproducible across faculty has not been written yet, not by us and not by anyone else in India.
The applicants do not need new vocabulary. They need a test that lets the existing vocabulary do work, and an institution willing to redesign the part of itself that the marketing layer is hiding. The faculty builds the tools, and the students generate the data. Everything else is marketing.
About the author
Dr. Shiva Kakkar runs Gradeless, the AI venture that built Rehearsal — a mobile-first capsule learning platform delivering 15-minute interactive courses on management, business strategy, and AI for managers. He teaches Management Development Programs in leadership, organizational behavior, and AI strategy at XLRI Jamshedpur, IIM Ranchi, IIM Rohtak, and other top-tier Indian B-schools. Gradeless’s platforms are deployed across Jaipuria Institute of Management and the Seth M.R. Jaipuria K-12 schools network. Shiva writes on educational AI, organizational behavior, and the socio-economics of credentials at shivakakkar.com. Connect on LinkedIn.