What Universities Are Actually Paying For When They Hire Consultants

The New South Wales Parliamentary inquiry into university governance has reignited a familiar debate about consultants in higher education. The critique is blunt: universities spend too much on external advice, the outcomes don't justify the cost, and the money would be better directed elsewhere — or recovered by using internal expertise instead.

There is something worth engaging with in that critique. But the diagnosis is incomplete, and the prescription is worse. Understanding why requires looking past the invoice and examining what actually sits underneath it.

The business model is the problem — not the consultants

Large professional services firms are structured as pyramids. A small number of senior partners sit at the top. A mid-tier of project managers coordinates delivery. A large base of graduate analysts, capable and intelligent but without deep sector knowledge, does the actual work.

This structure exists to serve a specific business logic. Partners are incentivised through bonus structures tied to revenue. The larger the engagement, the larger the bill, the larger the bonus. The margin charged to clients does not reflect the cost of the work being delivered — it reflects the cost of the business model sitting above it.

What this means in practice is that universities are not overpaying for expertise. They are overpaying for a profit structure that has nothing to do with their problem. Strip out the margin, pay the graduate analysts at their actual cost-recovery rate, and the invoice would look very different. The work would be the same.

This is the critique the parliamentary inquiry should have landed on — and mostly didn't.

Why internal expertise is not the answer either

The alternative proposition — that universities should solve strategic problems using their own people — is politically appealing and practically incoherent.

Universities do have deep expertise. They have economists, organisational theorists, policy scholars, and finance researchers. But there is a fundamental difference between expertise in a field and professional practice in that field. An academic who studies organisational transformation is creating and disseminating knowledge about transformation. That is their job. It is not their job to manage a project team, redesign operational systems, and drive an eighteen-month implementation inside their own institution.

Beyond the job description, there is the political reality. Internal people are already implicated in the problems they would be asked to solve. Choosing whose internal model prevails in a contested organisational question is a political act, not a technical one. And there is no reward structure that would motivate a senior academic to set aside their research agenda for a year to run an operational transformation project. The incentive simply does not exist.

What external consultants bring that internal expertise structurally cannot is the ability to operate outside the political economy of the organisation. That is genuinely valuable. The question is whether that value is being captured in the engagements universities are actually commissioning — and at what price.

The cookie-cutter trap

The dominant model of university consultancy compounds the structural problem with a methodological one.

Large firms typically arrive with a predetermined framework for what good looks like. They map the client's existing practices against that framework, identify the gaps, and recommend changes to close them. It is efficient, repeatable, and routinely produces recommendations that do not fit the organisation they were designed for.

The reason is not that the frameworks are poorly designed. It is that strategy cannot be reduced to gap analysis against a universal template. There is no such thing as research strategy best practice that applies equally to a sandstone research university, a regional institution serving a specific community, and a specialist medical research institute. The founding legislative acts are different. The histories are different. The cultures, workforces, funding bases, and interpretations of institutional purpose are different.

Arriving with a pre-formed hypothesis about what the answer should look like closes down the diagnostic work that actually produces fit-for-purpose solutions. The right approach starts not with a model but with a question: what is this organisation genuinely trying to achieve, and what is currently preventing it?

That requires a different kind of consultant — one who treats problem framing as primary and solution design as secondary, and who has the sector knowledge to do that work credibly.

What the sector actually needs

The homogeneity now visible across Australian higher education is not accidental. It is the accumulated result of twenty years of institutions optimising themselves against external funding stimuli — ARC and NHMRC grant mechanisms, international student revenue, ERA rankings — rather than against a clearly articulated institutional purpose.

Those stimuli are not irrelevant. Regulatory compliance is the price of entry. But when external funding mechanisms become the primary organising logic of a research institution, the institution gradually loses the capacity to think strategically about what it is actually for.

The leaders who are now navigating the sector's financial strain most effectively are, in most cases, the ones who have done the harder work of articulating a differentiated institutional purpose and redesigning their organisation around it. That means asking genuinely difficult questions about which funding streams serve the institution's purpose and which ones the institution has simply been chasing — and being willing to divest from the latter.

The consultancy that serves universities well in this environment looks quite different from the pyramid model. It brings senior people with genuine sector experience — people who have worked in and led research organisations, who understand the cultures and political realities from the inside, and who have seen enough variation across institutions to know that the answers are never generic. It operates with a defined methodology, because professional rigour matters, but that methodology is oriented toward diagnosis and co-design rather than template delivery. And it is structured in a way that puts the senior expertise at the point of delivery, not behind glass at the top of an org chart.

Insights

The parliamentary debate about university consultancy spending is asking the wrong question. The issue is not whether universities should use external advisers — they should, and the reasons are structural and sound. The issue is what they are paying for and whether they are getting it.

Institutions that get genuine value from external consultancy share three characteristics. First, they insist on being diagnosed before they are prescribed to — they do not accept engagements that begin with a solution already in hand. Second, they commission work that connects directly to a clearly articulated institutional purpose, which means that purpose has to exist and be genuinely differentiated before the consultant walks in the door. Third, they pay for senior sector expertise at the point of delivery, not for a business model that sits above it.

The universities now making the most consequential strategic decisions — reformulating their research base around long-term, organisationally owned funding, restructuring around purpose rather than funding mechanisms, building cultures that can deliver against a specific institutional mission — are not doing so by running best practice surveys. They are doing so by asking better questions. The right external adviser helps them do exactly that. The wrong one sells them the illusion that the question has already been answered.

Check out the full discussion on RSA's official podcast: Inside Research Strategy

https://www.youtube.com/watch?v=YAkDdBzfBE8