Lee Hickin, who until recently oversaw Microsoft’s AI policy in Asia, will helm the National AI Centre in Australia, guiding a $21.6 million federal investment aimed at boosting the nation’s artificial intelligence capabilities.
In a LinkedIn post announcing his new role, Hickin stated his primary responsibility as director will be “helping to drive Australia’s AI future.” He is also tasked with advising on the implementation of debated safeguards for emerging AI technologies.
The National AI Centre, managed by the Department of Industry, Science and Resources (DISR), provides financial support to local AI startups and assists small to medium businesses in utilizing machine learning (ML).
Balancing Innovation and Regulation
Hickin will work with the DISR’s AI Expert Group. Together, they will help industry adopt Australia’s Voluntary AI Safety Standard. Furthermore, the government is considering elevating the standard to mandatory guardrails for high-risk AI use cases. Vendors are lobbying against regulations, citing concerns that they could stifle innovation.
While Hickin declined to comment on his stance regarding the Albanese Government’s proposal for EU-style AI regulations, he has previously expressed the need for a balance.
“AI needs guardrails to protect society and AI needs flexibility to address global issues,” Hickin said last year at a roundtable hosted by the Malaysia Digital Economy Corporation.
In his recent LinkedIn post, Hickin emphasized his long-standing belief in AI’s positive potential:
“I have long been an advocate for the positive potential for AI in our lives, communities and industry.”
A ‘Perfect’ Fit
Industry Minister Ed Husic lauded Hickin’s extensive commercial background. According to Husic, Hickin’s “30 years of commercial experience” at “companies like Microsoft and Amazon” made him “an ideal fit” for the National AI Centre leadership position.
Hickin’s career includes two stints at Microsoft. From 2005 to 2015, he held titles such as security technology specialist and IoT product manager. He then worked at AWS for two years as APAC IoT business development lead and APAC head of platform technology business development. He returned to Microsoft in 2018 as its chief technology officer and was promoted in 2023 to Asia AI Policy Lead.
Husic highlighted Hickin’s contributions to shaping AI policy with the government, including his support for developing the “AI Action Plan.” Hickin also served four years on NSW’s “AI Review Committee” (AIRC), Australia’s first AI-specific government watchdog.
Navigating Complexities
Hickin’s experience spans both encouraging companies to embrace AI and auditing ML projects, positioning him well for this role. He headed Microsoft’s ANZ Office of Responsible AI and served as a member of the NSW Government’s AIRC.
The government must decide whether to follow the EU’s approach and enact laws protecting citizens from AI-related risks, such as the privacy and fairness concerns raised by the private sector’s rapid adoption of AI; regulators and civil society groups are advocating for robust regulation. Alternatively, it could follow the US approach and hold back from regulations that might hinder AI-enabled productivity, a hands-off stance supported by lobbyists such as the Business Council of Australia.
During Hickin’s concurrent work for AIRC and Microsoft, he introduced to NSW government agencies Microsoft technology that AIRC had raised concerns about. This demonstrated the challenges of balancing AI innovation with careful risk management.
At the time, he said that the opportunity to work alongside NSW Police while implementing the “AI Insights platform” was a “privilege.” He said it “can speed up the analysis of evidence, accelerating justice.”
However, AIRC’s review of Insights cautioned that the AI tool could bias investigations. In particular, the committee was concerned that investigations could disproportionately target communities more likely to appear in surveillance feeds.
Furthermore, legal academics raised concerns about the use of biometric ML models, which pose a higher risk of misidentifying minorities. However, Hickin declined to release the audits that Microsoft’s Office of Responsible AI conducted to evaluate the platform’s risks, and he did not provide a summary of those audits.