What does AI make of SERD?

The Australian Government's Strategic Examination of Research and Development (SERD) was, by any measure, a serious consultation. Six issues papers. Nine months of public engagement. And 344 unique submissions from universities, research institutes, industry associations, startups, civil society groups, individual researchers, and everyone in between — collectively representing the full breadth of the country's research and innovation community.

The problem with serious consultations is that they produce serious volumes of material. Reading every submission isn't something a single person, or even a small team, can do with meaningful analytical rigour. So when the Ambitious Australia final report landed, most of us had to take on faith that it reflected what the community actually argued, rather than just what the panel found convenient.

I wanted to test that assumption. So I used AI to read everything.

The Process

The SERD consultation portal hosts submissions as a mix of text-form responses and uploaded PDFs and Word documents. Getting all of that into a format an AI could analyse was relatively straightforward with Anthropic's Claude Code. The portal is a single-page application, and what look like downloadable files are, in many cases, HTML shells. Accessing the actual content of the 118 file-based submissions required navigating to each submission's view page, where the platform renders an automated transcription of the uploaded document. Claude extracted these and saved them to my local desktop with little effort.

Collecting the 357 text-form responses entered directly into the portal was slightly more difficult, and required a second pass to complete. The result was a corpus of 475 submissions, totalling millions of words, which I then fed to Claude.
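The extraction step can be sketched roughly as follows. This is an illustrative reconstruction, not the actual script: the portal's markup, and in particular the `transcription` class name, are assumptions made for the example. The parsing itself uses only the Python standard library, so it runs on any sample HTML.

```python
# Hypothetical sketch of pulling the rendered transcription text out of a
# submission "view" page. The <div class="transcription"> structure is an
# assumption for illustration, not the portal's real markup.
from html.parser import HTMLParser


class TranscriptionExtractor(HTMLParser):
    """Collects text found inside an assumed <div class="transcription">."""

    def __init__(self):
        super().__init__()
        self._depth = 0   # div-nesting depth once inside the transcription block
        self.chunks = []  # text fragments collected so far

    def handle_starttag(self, tag, attrs):
        if self._depth:
            if tag == "div":          # track nested divs so we close correctly
                self._depth += 1
        elif tag == "div" and ("class", "transcription") in attrs:
            self._depth = 1           # entered the transcription block

    def handle_endtag(self, tag):
        if self._depth and tag == "div":
            self._depth -= 1

    def handle_data(self, data):
        if self._depth and data.strip():
            self.chunks.append(data.strip())


def extract_transcription(page_html: str) -> str:
    """Return the plain text of the (assumed) transcription block."""
    parser = TranscriptionExtractor()
    parser.feed(page_html)
    return "\n".join(parser.chunks)


sample = '<div class="transcription"><p>Our submission argues for reform.</p></div>'
print(extract_transcription(sample))  # → Our submission argues for reform.
```

In practice each extracted transcription would then be written to a local text file, one per submission, before being handed to the model.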

Claude was asked to do something specific: perform a thematic analysis of the submissions and produce up to 20 recommendations, based exclusively on what submitters argued. Not a restatement of the issues papers' own proposals. Not a summary of what the government suggested. A synthesis of the community's actual positions — their diagnoses of what's broken, their criticisms of proposed changes, and their specific reform suggestions, supported by direct quotations.
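The constraints above can be paraphrased as a single structured brief along these lines. This is an illustrative reconstruction for readers who want to replicate the approach; the exact prompt wording used is not reproduced in this post.

```python
# Illustrative reconstruction of the analytical brief given to the model;
# not the actual prompt text used for the analysis described in this post.
ANALYSIS_PROMPT = """\
You are analysing {n} public submissions to the SERD consultation.

1. Perform a thematic analysis across the full corpus.
2. Produce up to 20 recommendations based EXCLUSIVELY on what submitters
   argued - not the issues papers' proposals, and not a summary of what
   the government suggested.
3. For each theme, report how many submissions raised it, the diagnoses
   of what is broken, criticisms of proposed changes, and specific reform
   suggestions, supported by direct quotations.
"""

print(ANALYSIS_PROMPT.format(n=475))
```

Framing the task this way matters: it directs the model to synthesise the community's positions rather than echo the consultation's own framing, which is what makes the later comparison against the final report meaningful.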

Claude was then given the Ambitious Australia final report and asked to compare its own recommendations against the panel's.

The output is two documents. Both are 100% AI-generated. No manual coding, no human thematic analysis, no editorial judgement from me about what mattered, and no editing of the final documents. The AI read the submissions; the AI wrote the analysis.

What the Research Community Actually Said (according to AI)

The single most consistent theme — appearing in 346 of 475 submissions — was a call for an overarching National RD&I Strategy. This sounds like support for the SERD process itself, but it wasn't. Submitters, including the Group of Eight, Universities Australia, the Business Council of Australia, and hundreds of individual researchers, were arguing that releasing six disconnected issues papers without an integrating strategic framework was itself a symptom of the fragmentation the review was supposed to fix. The University of Sydney put it plainly: the approach amounted to "cherry-picking parts of the system — fixing one thing here, and another thing there — rather than looking at it from a holistic view."

The second most prominent theme (335 submissions) concerned how focus areas for national RD&I investment would be selected and defined. Submitters broadly supported mission-oriented research but raised pointed concerns about rigidity and capture. The Australian Academy of the Humanities argued the definition of R&D was "too narrow," and many submissions warned against locking in a fixed sectoral structure that could default to incumbent industries. A recurring suggestion was that enabling technologies — AI, quantum, data infrastructure — should be treated as horizontal capabilities running across all missions, not as one sector competing with others.

Other prominent themes: the structural barriers to commercialisation (326 submissions), the inadequacy of current research evaluation frameworks (305), the capital and ecosystem gaps facing SMEs and deep tech ventures (277), workforce precarity and career pathway failures (238), and the need for research governance that is genuinely independent from political and industry influence (235).

Three themes that featured strongly in submissions received little attention in public debate: international collaboration and positioning (156 submissions calling for a proactive strategy), IP reform and open access mandates (124 submissions), and a specific, evidenced call to close the $786 million annual funding gap between allocated funds and the real cost of conducting research at Medical Research Institutes.

How Does the Final Report Compare?

The second document maps the submission analysis against the Ambitious Australia final report, recommendation by recommendation.

The areas of genuine alignment are encouraging. On capital and investment reform — angel investor incentives, VC scale, superannuation deployment, fund-of-funds, and exit pathways — the panel and the research community diagnosed the same problems and broadly agreed on solutions. The same is true for workforce development, administrative simplification, and the need for outcome-based measurement rather than input metrics.

The divergences are more instructive.

University research specialisation is the starkest. Recommendation 3 of the final report would allow universities to reduce the breadth of their research activities and concentrate on areas of competitive advantage. In the submission record, this proposal was actively and repeatedly opposed — particularly by smaller universities, regional institutions, and humanities and social science faculties, who argued that research breadth is not inefficiency but intellectual insurance. The panel adopted the recommendation despite this opposition.

Governance independence is the most structurally significant divergence. Across 235 submissions on coordination and governance, a consistent demand emerged for a body insulated from political short-termism and industry capture — with statutory independence, fixed terms, and transparent conflict-of-interest rules. The final report's National Innovation Council reports directly to the Prime Minister and the Minister for Industry. This is precisely the governance model submitters warned against, and the report does not address the capture concerns raised in submissions.

Mission framework design follows the same pattern. Submissions cautioned against locking in fixed sectoral priorities. The final report prescribes exactly six fixed National Innovation Pillars. "Technology" appears as one pillar among equals — the structure submitters cautioned against — while no pillar explicitly accommodates humanities, social sciences, cultural industries or service-based sectors.

There are also notable absences. International collaboration strategy, IP and open access reform, and a funded plan to close the research cost gap all featured prominently in submissions. None produced a standalone recommendation in the final report.

Why This Matters

Consultation processes are only as valuable as the accountability mechanisms that surround them. When a government asks for public input and then publishes a final report, the community is largely reliant on trust that the input shaped the outcome. Most people don't have the time or the tools to check.

AI changes that calculus. What took months of manual reading and coding in a traditional policy analysis can now be done in hours. The analysis here isn't perfect — automated transcription introduces errors, and thematic coding by AI carries its own assumptions — but it is transparent, reproducible, and grounded in the actual text of submissions.

The Ambitious Australia report is a serious piece of work and many of its recommendations are well-founded. But the comparison document suggests that on at least three significant structural questions — governance independence, university specialisation, and mission framework rigidity — the panel made choices that diverged from the clear weight of submissions. That's not necessarily wrong. Expert panels are not obliged to follow the majority view. But the community deserves to know where the divergences are.

Both documents — the thematic analysis of 475 submissions and the comparison with the final report — are available below. I'd welcome responses from anyone who worked on the consultation, contributed a submission, or has a view on whether this kind of AI-assisted analysis is useful for public policy accountability.

The thematic analysis and comparison were generated by Claude (Anthropic). The author provided the link to the submission data portal and directed the analytical framing through a single prompt; no manual thematic coding was performed.

Thematic Analysis Document

Comparison Analysis Document
