Last month, MLCommons President Peter Mattson joined several industry leaders and experts to discuss the road forward for responsible AI innovation at the Asia Tech x Singapore summit. The panel discussion, titled “Innovating Within a Secure, Reliable and Safe AI Landscape,” focused on striking a balance between mitigating risk and fostering innovation as AI continues to evolve.

Mattson was joined by panel moderator Denise Wong, Assistant Chief Executive at the Infocomm Media Development Authority, alongside the following experts:

  • Yoichi Iida, Special Policy Adviser to the Minister, Ministry of Internal Affairs and Communications, Japan
  • Dr. Bo Li, Professor, University of Illinois at Urbana-Champaign and Chief Executive Officer, Virtue AI
  • Dr. Chris Meserole, Executive Director, Frontier Model Forum
  • Professor Lam Kwok Yan, Executive Director, Digital Trust Centre Singapore and Singapore AI Safety Institute

Throughout the discussion, Mattson emphasized the importance of using accessible data in developing reliable and secure AI systems, including through MLCommons’ Croissant tool, which summarizes and describes datasets. “Data is what determines the behavior of your system,” said Mattson. “It’s not the Python, it’s the data that was used to train, fine-tune, and prompt the AI. That is the key to success.”
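
For readers curious what working with a Croissant description looks like in practice, here is a minimal, illustrative sketch using the open-source mlcroissant Python library. The metadata URL and the record set name "default" are placeholders for illustration, not a specific MLCommons dataset.

```python
# Minimal sketch: reading a dataset's Croissant (JSON-LD) description with the
# mlcroissant library. The URL and record set name below are placeholders.
import mlcroissant as mlc

# Load the Croissant metadata that summarizes and describes the dataset.
dataset = mlc.Dataset(jsonld="https://example.org/my-dataset/croissant.json")  # placeholder URL

# Inspect the human-readable summary fields captured in the metadata.
metadata = dataset.metadata.to_json()
print(metadata["name"], "-", metadata.get("description", ""))

# Iterate over a few records from a record set (the name depends on the dataset).
for i, record in enumerate(dataset.records(record_set="default")):
    print(record)
    if i >= 2:
        break
```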

Mattson also highlighted MLCommons’ partnership with the AI Verify Foundation in Singapore to develop benchmarks, such as AILuminate, that help enterprise leaders understand and mitigate the risks of AI deployment. As part of the summit, MLCommons and the AI Verify Foundation were proud to release proof-of-concept AILuminate testing in Chinese, a major milestone toward clear, rigorous LLM benchmarking in the world’s most spoken first language.

While in Singapore, MLCommons also released updated reliability scores for a new and expanded list of LLMs and announced a new partnership with NASSCOM, India’s top tech trade association, to expand independent, empirically sound benchmarking across South Asia.

“MLCommons has a mission to make AI better for everyone,” said Mattson. “We interpret that as growing the market in a way that delivers benefits to us all.”

The MLCommons AI Risk & Reliability working group is a highly collaborative, diverse set of experts committed to building a safer AI ecosystem. We welcome others to join us.