Schedule

The workshop will be a full-day event from 8:30 am to 5:00 pm GMT+1 (Vienna time). Throughout the day we will be livestreaming invited talks, contributed talks, two poster sessions, and a live panel discussion.

The schedule is as follows. All times listed are in GMT+1:

| Time | Talk Title | Speaker |
| --- | --- | --- |
| 08:30 - 08:45 | Opening Remarks | |
| 08:45 - 09:10 | Keynote Speaker 1: Accelerating Scientific Experimentation via Generative AI | Aditya Grover |
| 09:10 - 09:15 | Q&A: Aditya Grover | |
| 09:15 - 09:25 | Contributed Talk 1: Advancing Enterprise Spatio-Temporal Forecasting Applications: Data Mining Meets Instruction Tuning of Language Models for Multi-Modal Time Series Analysis in Low-Resource Settings | Sagar Srinivas Sakhinana |
| 09:25 - 10:25 | Coffee Break + Poster Session I | |
| 10:25 - 10:35 | Contributed Talk 2: Energy Minimizing-based token merging for accelerating Transformers | Hoai-Chau Tran |
| 10:35 - 10:45 | Contributed Talk 3: D^2-Sparse: Navigating the low data learning regime with coupled sparse networks | Diganta Misra |
| 10:45 - 10:55 | Contributed Talk 4: Towards Bandit-based Optimization for Automated Machine Learning | Amir Rezaei Balef |
| 10:55 - 11:25 | Networking Session | |
| 11:25 - 11:35 | Contributed Talk 5: Towards Leveraging AutoML for Sustainable Deep Learning: A Multi-Objective HPO Approach on Deep Shift Neural Networks | Leona Hennig |
| 11:35 - 11:45 | Contributed Talk 6: Squeezing Lemons with Hammers: An Evaluation of AutoML and Tabular Deep Learning for Data-Scarce Classification Applications | Ricardo Knauer |
| 11:45 - 12:45 | Panel Discussion: Parameter and resource efficient development and use of Large Language Models (LLMs) | Gilles Quentin, Bonaventure F. P. Dossou, David Adelani |
| 12:45 - 14:15 | Lunch | |
| 14:15 - 14:40 | Keynote Speaker 2: Machine Learning and Inclusion | Georgina Curto Rex |
| 14:40 - 14:45 | Q&A: Georgina Curto Rex | |
| 14:45 - 15:10 | Keynote Speaker 3: Frontiers in AI for Human Security | Dalton Lunga |
| 15:10 - 15:15 | Q&A: Dalton Lunga | |
| 15:15 - 16:15 | Coffee Break + Poster Session II | |
| 16:15 - 16:25 | Contributed Talk 7: Sharpness-Aware Minimization (SAM) Improves Classification Accuracy of Bacterial Raman Spectral Data Enabling Portable Diagnostics | Kaitlin Zareno |
| 16:25 - 16:35 | Contributed Talk 8: A Low-Resource Framework for Detection of Large Language Model Contents | Linh Le |
| 16:35 - 16:45 | Contributed Talk 9: Majority or Minority: Data Imbalance Learning Method for Named Entity Recognition | Sota Nemoto |
| 16:45 - 16:55 | Contributed Talk 10: Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models | Tanmay Gautam |
| 16:55 - 17:00 | Closing Remarks | |

The panel discussion will focus on the parameter- and resource-efficient development and use of Large Language Models (LLMs).