
All Sessions Listing Page - 2023 Taiwan AI Academy Conf



61 speakers in total

All Sessions:

JongJye Sheen

JongJye Sheen AMD Instinct GPU – 1.19 ExaFlops System and LLM

AMD has been designing high-end graphics processors for a long time. In the 2010s, AMD joined the DOE HPC project in the United States and delivered the first exaflops system, Frontier. Frontier not only achieved more than 1.1 exaflops at 64-bit precision (June 2022), but was also the first system to deliver more than 60 GFlops/Watt, providing extraordinary energy efficiency. Alongside this breakthrough in performance, the comprehensive GPU software development environment ROCm provides the necessary tools (compiler, debugger, profiler) and software libraries for software experts to develop GPU code that runs on AMD Instinct GPUs. The AMD MI250X was the GPU used to build the Frontier system. The MI250X architecture provides hardware-level unified memory, with CPU and GPU connected by AMD XGMI Infinity Fabric. The following generation, MI300A, is another breakthrough: it integrates CPU and GPU in the same socket so that all HBM is shared between them, eliminating the costly memory copies between system memory and HBM. Memory-bound applications are expected to benefit from this architecture. Another upcoming product, MI300X, which follows the OAM design standard, will provide 192 GB of HBM3 memory and 5.2 TB/s of internal HBM memory bandwidth, making it well suited for LLM applications. AMD's chiplet architecture has proven to be a successful technology that provides flexibility and scale-up capability within a socket, and can deliver versatile processing/memory combinations on a single socket to support the growing demands of different workloads. AMD ROCm is now an officially supported platform for PyTorch; many AI models and frameworks are supported in the ROCm environment, and many of them have containers built for testing and available on GitHub. AMD ROCm inherits the philosophy of the open-source community: the AMD ROCm HPL and HPCG source code is downloadable ([community.amd.com link]). AMD believes that through collaboration and sharing with the open-source community, technology development will thrive. A ROCm learning environment and learning materials are also available on the web for general access.
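The abstract mentions that ROCm is now an officially supported PyTorch platform. As a minimal sketch (assuming a ROCm build of PyTorch and an AMD Instinct GPU; the matrix sizes are arbitrary and not from the talk), the familiar torch.cuda interface maps to the AMD device, so existing GPU code typically runs unchanged:

```python
# Minimal sketch (assumption, not from the talk): PyTorch built for ROCm
# exposes AMD Instinct GPUs through the familiar torch.cuda interface.
import torch

if torch.cuda.is_available():                  # True on a ROCm build with an AMD GPU present
    device = torch.device("cuda")              # maps to the AMD GPU under ROCm
    print("Running on:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b                                      # matrix multiply executed on the GPU (HIP/rocBLAS)
print(c.shape)
```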
Hsin-Ying Lee

Hsin-Ying Lee From 2D to 3D / From Image to Video

Generative models, particularly diffusion models, have revolutionized the field of artificial intelligence by enabling machines to autonomously generate realistic and creative content. While the primary emphasis has been on generating and manipulating static 2D images, recent months have borne witness to a captivating surge in advancements pertaining to 3D content and dynamic videos. This juncture marks a significant extension in both spatial and temporal dimensions, transitioning from 2D to 3D and from images to videos, respectively. In this intricate expansion lies a confluence of intriguing shared properties and formidable challenges. This talk will briefly introduce the current research landscape for both 3D and video synthesis, shedding light on their distinctive attributes and common hurdles.
Steven Yin

Steven Yin Welcome to the SysMoore Era - What Can AI/ML EDA Tools Do to Unleash Moore's Law?

In the SysMoore era, we face many challenges. This talk covers "The Five Overarching Themes" in current IC design: (i) the PPA push, (ii) first-tapeout success, (iii) multi-die/3D systems, (iv) safe, secure, resilient silicon, and (v) productivity, and shows how Synopsys EDA uses AI/ML to cope with these challenges.
Yennun Huang

Yennun Huang Academia Sinica Service LLM Wisdom: Knowing What It Actually Doesn't Know

With the rise of large language models, many companies and organizations have attempted to leverage this powerful tool to manage their proprietary data. At Academia Sinica, we are also focused on developing a useful LLM service system, SinicaWisdom, to facilitate administrative work. Generally, large language models provide responses based on the information they have encountered in the training data. However, when faced with questions they cannot answer, large language models tend to provide incorrect responses rather than admitting they don't know. This behavior necessitates a significant amount of effort in terms of re-inspection or re-examination. Furthermore, to restrict the information used by large language models for proprietary data, an information retrieval system is typically employed as the front-end system to initially gather documents relevant to the question. This practice limits the knowledge that LLMs can access and exacerbates this issue. In this presentation, we will discuss how we have enabled SinicaWisdom to "acknowledge what it does not know" and provide additional helpful information instead of simply stating "I cannot answer."
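The retrieval front end and the "acknowledge what it does not know" behavior described above can be pictured with a small sketch. This is not SinicaWisdom's implementation; the toy corpus, word-overlap retriever, score threshold, and llm_answer placeholder are all assumptions made purely for illustration:

```python
# Hedged sketch (not SinicaWisdom's code) of retrieval plus abstention:
# answer only when retrieved documents look relevant, otherwise admit not knowing.
from typing import List, Tuple

CORPUS = [
    "Travel reimbursement forms must be filed within 30 days of the trip.",
    "Annual leave requests are approved by the department head.",
]

def search(question: str) -> List[Tuple[str, float]]:
    """Toy retriever: score documents by word overlap with the question."""
    q = set(question.lower().split())
    scored = [(doc, len(q & set(doc.lower().split())) / len(q)) for doc in CORPUS]
    return sorted(scored, key=lambda x: x[1], reverse=True)

def llm_answer(question: str, context: List[str]) -> str:
    """Placeholder for the LLM call; a real system would prompt the model with the context."""
    return f"Based on the retrieved documents: {context[0]}"

def answer_with_abstention(question: str, min_score: float = 0.3) -> str:
    hits = search(question)
    relevant = [doc for doc, score in hits if score >= min_score]
    if not relevant:
        # Admit the gap and surface the nearest material instead of guessing.
        return ("I cannot answer this from the available documents; "
                f"the closest match was: \"{hits[0][0]}\"")
    return llm_answer(question, relevant)

print(answer_with_abstention("When must travel reimbursement forms be filed?"))
print(answer_with_abstention("What is the cafeteria menu today?"))
```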
 Lun-Wei Ku

Lun-Wei Ku Academia Sinica Service LLM Wisdom: Knowing What It Actually Doesn't Know

With the rise of large language models, many companies and organizations have attempted to leverage this powerful tool to manage their proprietary data. At Academia Sinica, we are also focused on developing a useful LLM service system, SinicaWisdom, to facilitate administrative work. Generally, large language models provide responses based on the information they have encountered in the training data. However, when faced with questions they cannot answer, large language models tend to provide incorrect responses rather than admitting they don't know. This behavior necessitates a significant amount of effort in terms of re-inspection or re-examination. Furthermore, to restrict the information used by large language models for proprietary data, an information retrieval system is typically employed as the front-end system to initially gather documents relevant to the question. This practice limits the knowledge that LLMs can access and exacerbates this issue. In this presentation, we will discuss how we have enabled SinicaWisdom to "acknowledge what it does not know" and provide additional helpful information instead of simply stating "I cannot answer."
Ted Chang

Ted Chang Prompting The Future: Transforming Medicine through Generative AI

Medicine, a discipline rooted in centuries of knowledge and practice, is now at the brink of one of its most transformative phases, spurred by the rise of generative AI. This speech delves into this impending revolution, shedding light on how artificial intelligence is poised to reshape our healthcare paradigms from the perspective of Quanta Computer, a leading AI platform provider for smart medicine and digital health. From AI-driven diagnostics that predict with unprecedented accuracy to personalized treatments tailored to individual genetic imprints, we are witnessing the dawn of a new era. This discourse not only celebrates the remarkable innovations that AI promises but also engages in a critical examination of the challenges and ethical dilemmas it presents. As we navigate this uncharted territory, the speech emphasizes the need for synergy between human insight and machine intelligence, ensuring that the future of medicine is not just technologically advanced but also
Lawrence Lessig

Lawrence Lessig AI and the Law

AI will change law fundamentally. The only question is whether it will change it in ways we like. There is no doubt that the technology will radically alter how we make and practice law. Through code, those changes will enhance the rule of law. But they will also make urgent the need to better govern the values that law embeds and makes real. In principle, AI could make law orders of magnitude better. In practice, we have yet to demonstrate — at least we in America — the capacity to govern that this new technology demands.
Amy Hu

Amy Hu Generative AI on Intel® architecture

Generative AI is causing impacts across various industries worldwide. For Taiwan's industries, it represents a significant challenge, but also a rare opportunity. Currently, there are numerous open-source generative AI pre-trained models, platform services, and resources available. The ability to optimize open-source pre-trained models and tools will be crucial for building a unique competitive advantage. Intel AI Everywhere can accelerate innovation throughout the entire AI product development cycle. It allows you to maximize the value of your products and services, and enables easy deployment on both the cloud and the edge while maintaining information security and regulatory compliance. Through examples of popular generative AI applications such as Stable Diffusion and Llama 2, we will explore how the advantages of Intel's software and hardware architecture empower everyone to grasp the opportunities of generative AI.
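As one possible illustration of optimizing an open-source pre-trained model on Intel hardware, the sketch below uses Intel Extension for PyTorch (ipex); the Llama 2 model choice, bfloat16 precision, and generation settings are assumptions, not details from the talk:

```python
# Hedged sketch: optimizing an off-the-shelf Hugging Face model for Intel CPUs
# with Intel Extension for PyTorch (ipex). Model name and bfloat16 choice are
# illustrative assumptions.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"        # assumed example; gated model requiring access approval
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

# ipex.optimize applies operator fusion and bfloat16-friendly layouts for Intel hardware.
model = ipex.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("Generative AI on Intel architecture", return_tensors="pt")
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```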
Barry Lam

Barry Lam Prompting The Future: Transforming Medicine through Generative AI

Medicine, a discipline rooted in centuries of knowledge and practice, is now at the brink of one of its most transformative phases, spurred by the rise of generative AI. This speech delves into this impending revolution, shedding light on how artificial intelligence is poised to reshape our healthcare paradigms from the perspective of Quanta Computer, a leading AI platform provider for smart medicine and digital health. From AI-driven diagnostics that predict with unprecedented accuracy to personalized treatments tailored to individual genetic imprints, we are witnessing the dawn of a new era. This discourse not only celebrates the remarkable innovations that AI promises but also engages in a critical examination of the challenges and ethical dilemmas it presents. As we navigate this uncharted territory, the speech emphasizes the need for synergy between human insight and machine intelligence, ensuring that the future of medicine is not just technologically advanced but also
Ed H. Chi

Ed H. Chi The LLM Revolution: Implications from Chatbots and Tool-use to Reasoning

Deep learning has been a shock to our field in many ways, yet many of us were still surprised at the incredible performance of Large Language Models (LLMs). LLMs use new deep learning techniques with massively large data sets to understand, predict, summarize, and generate new content. LLMs like ChatGPT and Bard have seen a dramatic increase in their capabilities: generating text that is nearly indistinguishable from human-written text, translating languages with amazing accuracy, and answering questions in an informative way. This has led to a number of exciting research directions for chatbots, tool-use, and reasoning:

- Chatbots: LLM chatbots are more engaging and informative than traditional chatbots. First, LLMs can understand the context of a conversation better than ever before, allowing them to provide more relevant and helpful responses. Second, LLMs enable more engaging conversations than traditional chatbots because they understand the nuances of human language and respond in a more natural way; for example, they can make jokes, ask questions, and provide feedback. Finally, because LLM chatbots can hold conversations on a wide range of topics, they can eventually learn and adapt to the user's individual preferences.

- Tool-use, retrieval augmentation, and multi-modality: LLMs are also being used to create tools that help us with everyday tasks. For example, LLMs can generate code, write emails, and even create presentations. Beyond human-like responses in chatbots, later LLM innovators realized LLMs' ability to incorporate tool-use, including calling search and recommendation engines, which means they can effectively become human assistants that synthesize summaries from web search and recommendation results. Tool-use integration has also enabled multimodal capabilities, meaning the chatbot can produce text, speech, images, and video.

- Reasoning: LLMs are also being used to develop new AI systems that can reason and solve problems. Using Chain-of-Thought approaches, we have shown LLMs' ability to break a problem down, use logical reasoning to solve each of the smaller problems, and then combine the solutions to reach the final answer. LLMs can answer common-sense questions by using their knowledge of the world to reason about the problem, and then use their language skills to generate text that is both creative and informative.

In this talk, I will cover recent advances in these three major areas, attempt to draw connections between them, and paint a picture of where major advances might still come from. While the LLM revolution is still in its early stages, it has the potential to revolutionize the way we interact with AI and make a significant impact on our lives.
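The Chain-of-Thought idea mentioned in the Reasoning point above can be sketched as a prompt-construction pattern. The prompt text and the complete() placeholder below are hypothetical, shown only to illustrate how a worked, step-by-step example is prepended before the new question:

```python
# Hedged sketch of Chain-of-Thought prompting: the model is shown a worked example
# with intermediate reasoning before being asked the new question.
# complete() is a hypothetical stand-in for any LLM text-completion call.
def complete(prompt: str) -> str:
    """Placeholder for an LLM API call (e.g., a chat/completions endpoint)."""
    raise NotImplementedError

direct_prompt = (
    "Q: A cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples are there now?\nA:"
)

cot_prompt = (
    "Q: A cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples are there now?\n"
    "A: Let's think step by step. The cafeteria starts with 23 apples, "
    "uses 20, leaving 23 - 20 = 3, then buys 6 more, so 3 + 6 = 9.\n"
    "Q: A parking lot has 15 cars. 8 leave and 5 arrive. How many cars remain?\n"
    "A: Let's think step by step."
)

# The CoT prompt supplies intermediate reasoning, which typically improves
# multi-step arithmetic and commonsense answers compared to the direct prompt.
# answer = complete(cot_prompt)
```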
ST Liew

ST Liew The Future of AI is on-device

Qualcomm has been cultivating AI technology for more than ten years and foresees that AI applications will bring unlimited possibilities and innovation to every industry. As generative AI adoption grows at record-setting speed and drives higher demand for compute, we believe the future of AI is on-device. In this session, you will see how we fuel on-device AI to enable generative AI to scale into more diverse fields, and what benefits this will bring in enhancing the user experience through AI.
Peter Wu

Peter Wu Democratizing LLM to Empower Generative AI Innovations

Generative AI is sparking a revolutionary shift in how humanity harnesses information technology, triggering a wave of innovation. Its applications are diversifying across various sectors, promising new opportunities. Generative AI is a knowledge-based technology, and currently, many countries are actively crafting legislation to regulate the risks associated with its development and use. To manage these risks, we rely on democratizing LLM AI technology, open-source models, and data governance, especially in critical areas like healthcare. Taiwan boasts a solid industrial foundation for promoting democratizing AI, with strengths in computing power, semiconductor manufacturing, systems, and IT services. With government support, Taiwan aims to emerge as a leader in both Generative AI and Trusted AI.
Da-shan Shiu

Da-shan Shiu Case Studies of Applying Generative AI Concepts to Low Resource Domains

Generative AI has achieved significant advances in a very short amount of time in the areas of language, vision, coding, speech, and audio. In this talk we share two cases in which generative AI is applied to low-resource scenarios. In the first case, we have some very precious samples of cellular channel responses from real-world measurement, and we would like to draw synthetic channel responses from the same distribution. In the second case, we are asked to customize an automatic speech recognition system to a target domain for which we do not have any audio-transcript pairs. We share our experience in how to tackle such low-resource or even zero-resource challenges.
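For the first case, one simplified way to picture "drawing synthetic samples from the same distribution" is to fit a density model to the scarce measurements and sample from it. The Gaussian mixture, feature dimensions, and data below are stand-ins for illustration; the abstract does not specify the actual generative model used:

```python
# Hedged illustration of sampling synthetic data from a distribution fitted to
# scarce measurements. A Gaussian mixture is a stand-in for whatever generative
# model is actually used in the talk.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Pretend these are a handful of precious measured channel responses (real-valued features).
measured = rng.normal(loc=[0.0, 1.0, -0.5], scale=[0.3, 0.2, 0.4], size=(32, 3))

gmm = GaussianMixture(n_components=2, random_state=0).fit(measured)
synthetic, _ = gmm.sample(1000)   # synthetic responses drawn from the fitted distribution

print("measured mean: ", measured.mean(axis=0))
print("synthetic mean:", synthetic.mean(axis=0))
```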
Wenping Shen

Wenping Shen Fine-tuning LLMs for Your Enterprise Applications

As the field of LLM research continues to rapidly evolve, enterprises must constantly "Fine-Tune" their LLMs to leverage the benefits of SOTA models. Here, we explore the practical reasons for fine-tuning LLMs for enterprise applications and share strategies for meeting these challenges head-on.
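One common way enterprises fine-tune an LLM without updating all of its weights is parameter-efficient fine-tuning with LoRA. The sketch below, using the Hugging Face peft library, is only an illustrative assumption; the base model, target modules, and hyperparameters are not taken from the talk:

```python
# Hedged sketch of parameter-efficient fine-tuning with LoRA via Hugging Face peft.
# The base model, target modules, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"         # assumed example base model (gated, requires access)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections typical for Llama-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()         # only the small LoRA adapters are trained

# Training would then proceed on the enterprise's own instruction/response pairs,
# e.g. via transformers.Trainer or trl's SFTTrainer (omitted here).
```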
Thomas Scialom

Thomas Scialom The AI Revolution

The rapid evolution of artificial intelligence is ushering in a new era, where large language models are outpacing human experts in fields as diverse as law and medicine. Comprehending this trajectory is essential for gauging the technology's future implications. In this keynote, Dr. Scialom offers a unique vantage point on the exponential rise and unfolding direction of AI. Through the lens of his own career—from the nascent stages of language models that could barely construct a sentence, to groundbreaking creations like his recent Llama 2—Thomas Scialom will share his personal perspective into the expanding competencies and complexities of Generative AI and large language models.