
RIKEN Announces Full Transition to "AskDona" Generative AI Chat for Initial Inquiries on the Supercomputer "Fugaku" Support Site

February 18, 2025

Please note that this article has been translated by generative AI.

TOKYO, JAPAN – GFLOPS Co., Ltd. (GFLOPS) and RIKEN are pleased to announce that the support site for the Supercomputer "Fugaku" (the "Fugaku Support Site") has fully transitioned its initial inquiry response system from a manual, human-operated process to an automated one powered by the generative AI assistant "AskDona"*¹. Previously, initial inquiries submitted via a form were handled by human staff.

"AskDona" was first introduced in July of last year as a generative AI chat application. It utilizes Retrieval-Augmented Generation*² (RAG) technology with the goal of promoting self-resolution for users' initial inquiries on the Fugaku Support Site. The decision to adopt "AskDona" followed a rigorous evaluation of factors such as its high response accuracy. Its unique RAG architecture, capable of simultaneously processing a large number of diverse files, was particularly highly praised.

For more details, please see the press release regarding the initial implementation of AskDona here.


Over an operational period of approximately six months, "AskDona" demonstrated its ability to provide accurate answers to inquiries from "Fugaku" users and proved to be a highly effective tool for promoting self-resolution. Furthermore, after careful analysis of data on factors including hallucination risks and user engagement trends, it was determined that the AI's response accuracy and practicality had reached a level sufficient to handle all initial support inquiries. Based on this track record, as of February 2025, all initial user inquiries on the Fugaku Support Site have been fully transitioned to the "AskDona" generative AI assistant.

The RIKEN Center for Computational Science (R-CCS) is actively promoting the use of generative AI. To learn more about the center's initiatives in this area, please refer to the following interview article.

The Future of the Supercomputer "Fugaku" and Generative AI: A Vision from the RIKEN Center for Computational Science (R-CCS)

In this exclusive interview, Director Satoshi Matsuoka shares his vision for the societal implementation of the world-class supercomputer "Fugaku" and offers his profound insights on its integration with generative AI.
  
[Read the interview]


About GFLOPS Co., Ltd.

GFLOPS Co., Ltd. leverages cutting-edge AI technology and data analysis expertise to provide AI solutions that support corporate operational efficiency and innovation. In particular, its proprietary solutions combining Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) technology achieve high response accuracy and flexibility, leading to their adoption by numerous companies.

Company Name: GFLOPS Co., Ltd.
Representatives: Maria Morimoto, Co-CEO; Ryosuke Suzuki, Co-CEO
Head Office: Shibuya-ku, Tokyo, Japan
Business Activities: Development and provision of AI services utilizing Large Language Models (LLMs), generative AI technologies, and more.
Website: https://gflops-ai.com/

About RIKEN
As Japan's only comprehensive research institution for the natural sciences, RIKEN conducts research across a broad spectrum of fields, including physics, engineering, chemistry, mathematical and information science, computational science, biology, and medical science. To disseminate its research outcomes to society, RIKEN actively engages in joint and commissioned research projects with universities and private companies, and promotes the technology transfer of its intellectual property to industry.

President: Makoto Gonokami
Headquarters: Wako, Saitama, Japan
Website: https://www.riken.jp/

*¹ AskDona: A generative AI chat assistant implemented by RIKEN R-CCS on the Fugaku Support Site. It is designed to answer user questions by referencing "Fugaku" manuals and technical documents using applied RAG technology. AskDona Website: https://askdona.com

*² Retrieval-Augmented Generation (RAG): A technique that enables a Large Language Model to reference external documents or data sources in real time when generating a response. This helps to suppress hallucinations (the generation of false information) and improve the accuracy of its answers.
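
As a rough illustration of how this RAG pattern works, the minimal Python sketch below retrieves the most relevant reference documents for a question and passes them to a language model as grounding context. It is for illustration only; the toy document store, keyword-overlap retriever, and call_llm stub are hypothetical placeholders and do not represent AskDona's actual implementation.

from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

# Toy document store standing in for support-site manuals and technical documents.
DOCS = [
    Document("Login guide", "Use ssh with your issued account to reach the login nodes."),
    Document("Job submission", "Submit batch jobs with pjsub and check their status with pjstat."),
]

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Rank documents by naive keyword overlap with the query and return the top k."""
    terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(terms & set(d.text.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; here it simply echoes the grounded prompt."""
    return "[model answer grounded in]\n" + prompt

def answer(query: str) -> str:
    # Step 1 (retrieval): fetch the most relevant reference text for the question.
    context = "\n".join(d.text for d in retrieve(query, DOCS))
    # Step 2 (generation): ask the model to answer using only the retrieved context,
    # which is what helps suppress hallucinations.
    prompt = "Answer using only this context:\n" + context + "\n\nQuestion: " + query
    return call_llm(prompt)

print(answer("How do I submit a batch job?"))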



