Events
AI for humanity: why does it matter?
ISTD COIL Seminar by James Ong – As AI progresses at an exponential rate, are we steering toward a more sustainable development of AI that meets humanity’s needs? James will share why and how he advocates embracing AI for Humanity.

Utilising large language models for tour itinerary recommendation
ISTD PhD Oral Defence Seminar by Ho Ngai Lam – Planning a tour itinerary poses a significant challenge for tourists, especially when navigating unfamiliar territory, and the inherent computational complexity of the tour recommendation problem compounds this challenge further.
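
As a rough illustration of the combinatorial side of the problem (and not the candidate’s method), here is a minimal greedy baseline that builds an itinerary under a time budget; the points of interest, scores, and visit durations are invented for the sketch.

```python
# Greedy tour itinerary baseline: repeatedly pick the POI with the best
# score-per-hour that still fits the remaining time budget.
# All POIs, scores, and durations below are made up for illustration.

def greedy_itinerary(pois, budget_hours):
    itinerary, remaining = [], budget_hours
    candidates = sorted(pois, key=lambda p: p["score"] / p["hours"], reverse=True)
    for poi in candidates:
        if poi["hours"] <= remaining:
            itinerary.append(poi["name"])
            remaining -= poi["hours"]
    return itinerary

pois = [
    {"name": "museum", "score": 8, "hours": 3},
    {"name": "park", "score": 5, "hours": 1},
    {"name": "old town walk", "score": 9, "hours": 4},
    {"name": "viewpoint", "score": 4, "hours": 1},
]
print(greedy_itinerary(pois, budget_hours=6))  # ['park', 'viewpoint', 'museum']
```

Exact formulations of this task (e.g. as an orienteering problem) are NP-hard, which is part of why heuristics and, more recently, LLMs are attractive.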

Building effective RAG applications for public good
ISTD COIL Seminar by Isaac Lim – Join Isaac, a data scientist from GovTech’s AI Practice, as he shares his experience empowering developers in government to build effective retrieval-augmented generation (RAG) and, more generally, LLM-based applications.
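
For readers unfamiliar with the pattern, a minimal RAG pipeline retrieves the documents most relevant to a query and builds an LLM prompt around them. The sketch below uses plain TF-IDF retrieval so that it is self-contained (production systems typically use dense embeddings and a vector store); the corpus is invented, and none of this reflects GovTech’s actual stack.

```python
# Minimal RAG sketch: TF-IDF retrieval plus prompt assembly.
# The corpus is invented for illustration; a real system would use
# dense embeddings, a vector store, and an actual LLM call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Passport renewals can be completed online via the immigration portal.",
    "Hawker centre licences are issued by the environment agency.",
    "Public library cards are free for all residents.",
]

def retrieve(query, k=2):
    # Rank documents by cosine similarity to the query.
    vec = TfidfVectorizer().fit(docs + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(query):
    # Ground the model by stuffing retrieved context into the prompt.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I renew my passport?"))  # send this to your LLM of choice
```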

Bridging the gap: navigating the transition from design school to UX design in the real world
ISTD COIL Seminar by Samath Aravinda & Imran Nordin – In this talk, I will share my personal journey as a UX designer, highlighting the essential design processes that drive successful projects. We will discuss the many opportunities available in the field, from emerging technologies to innovative methodologies, and how these can shape your career trajectory. My colleague, a seasoned UX researcher, will then give a 15–20 minute presentation on the UX research process, covering the key methodologies, tools, and techniques used to gather user insights that inform design decisions.

LLMs for out-of-domain use cases
ISTD COIL Seminar by Anirudh Shrinivason – This talk will “delve” into Cohere’s strategies for harnessing the potential of Large Language Models (LLMs) within the enterprise domain. By exploring fine-tuning techniques and the integration of Retrieval-Augmented Generation (RAG), we aim to showcase how these methods can be tailored to solve complex, industry-specific challenges.
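
Cohere’s internal tooling is not public, so as a stand-in, here is roughly what parameter-efficient fine-tuning of an open model looks like with the Hugging Face peft library; the base checkpoint, target modules, and hyperparameters are placeholders, not Cohere’s setup.

```python
# LoRA fine-tuning sketch via Hugging Face peft: wrap a frozen base model
# with small trainable low-rank adapters instead of updating all weights.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in checkpoint

config = LoraConfig(
    r=8,                        # adapter rank
    lora_alpha=16,              # scaling factor for adapter updates
    target_modules=["c_attn"],  # GPT-2's attention projection; varies by model
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of base weights
```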

Introduction to modern LLM pre-training: Sailor use case
ISTD COIL Seminar by Qian Liu – This talk will present key techniques in modern LLM pre-training, including scaling laws, data quality engineering, data mixture optimisation, and efficient training strategies. We will use Sailor, a family of open language models (0.5B to 14B parameters) tailored for Southeast Asian languages, as a case study.
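
As a taste of the scaling-law material, the sketch below does a back-of-envelope compute-optimal sizing in the spirit of Chinchilla (Hoffmann et al., 2022), using the common approximations C ≈ 6ND for training FLOPs and D ≈ 20N tokens per parameter; Sailor’s actual recipe (continual pre-training from Qwen1.5) is considerably more involved.

```python
# Chinchilla-style rule of thumb: training compute C ~ 6*N*D, with the
# compute-optimal data size roughly D ~ 20*N tokens.
# Solving C = 6 * N * (20 * N) for N gives N = sqrt(C / 120).

def compute_optimal(flops_budget, tokens_per_param=20.0):
    n_params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    return n_params, tokens_per_param * n_params

n, d = compute_optimal(1e21)
print(f"~{n / 1e9:.1f}B params, ~{d / 1e9:.0f}B tokens")  # ~2.9B params, ~58B tokens
```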

Fair generative modelling
ISTD PhD Oral Defence Seminar by Teo Tzu Hsuan Christopher – In this dissertation, we improve fairness in generative models by identifying and addressing constraints that may limit their broader adoption.

Towards intelligent analytics for smarter animal behavioural analysis
ISTD PhD Oral Defence Seminar by Ong Kian Eng – Understanding and analysing animal behaviour is crucial for gaining insights into the health, needs, and overall well-being of animals. This involves measuring and monitoring factors such as size, growth, poses, and actions. Animal behaviour analysis is important across a wide range of domains and industries, including livestock farming, veterinary science, scientific research, and ecological and conservation studies.

Modern portfolio construction with advanced deep learning models
ISTD PhD Oral Defence Seminar by Joel Ong – We explore the modern application of deep learning techniques to portfolio construction, presenting methodologies that enhance traditional investment strategies. Central to this research are three frameworks that leverage deep learning to optimise financial portfolios.
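
As a toy illustration of the general idea (and not the frameworks presented in the defence), a small network can map recent returns to portfolio weights and be trained end-to-end on a Sharpe-ratio objective; the data below is random noise.

```python
# Toy deep-learning allocator: a network maps recent return history to
# long-only portfolio weights (softmax) and is trained to maximise the
# Sharpe ratio of the resulting portfolio. Data is synthetic noise.
import torch

n_assets, lookback = 5, 30
net = torch.nn.Sequential(
    torch.nn.Linear(n_assets * lookback, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, n_assets),
    torch.nn.Softmax(dim=-1),  # weights are non-negative and sum to 1
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    past = torch.randn(64, n_assets * lookback)        # synthetic history
    future = 0.001 + 0.01 * torch.randn(64, n_assets)  # synthetic next returns
    returns = (net(past) * future).sum(dim=-1)
    sharpe = returns.mean() / (returns.std() + 1e-8)
    opt.zero_grad()
    (-sharpe).backward()  # maximise Sharpe by minimising its negative
    opt.step()
```
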
Sparsity in text-to-speech
ISTD PhD Oral Defence Seminar by Perry Lam – Neural networks are known to be over-parametrised, and sparse models have been shown to perform as well as dense models across a range of image and language processing tasks. However, while compact representations and model compression methods have been applied to speech tasks, sparsification techniques have rarely been used on text-to-speech (TTS) models. We seek to characterise the impact of selected sparsification techniques on the performance and complexity of TTS models.
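
As a concrete example of one such technique, unstructured magnitude pruning zeroes out the smallest weights in a layer. The sketch below applies PyTorch’s built-in pruning utilities to a stand-in linear layer rather than a real TTS model.

```python
# Unstructured L1 (magnitude) pruning with PyTorch's pruning utilities,
# shown on a placeholder layer rather than an actual TTS model.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(256, 256)  # stand-in for a TTS model sub-layer
prune.l1_unstructured(layer, name="weight", amount=0.9)  # zero the smallest 90%

sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.0%}")  # ~90%

prune.remove(layer, "weight")  # fold the mask in, making the pruning permanent
```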