Self-Supervised Learning and Foundation Models
Unlike supervised learning, which relies on labeled data, foundation models are built with self-supervised learning: the model creates its own training tasks and labels directly from unlabeled data.
IDA Conference, København V
Wednesday 22 January 2025
17:00 - 19:00
DKK 0.00 - DKK 100.00
English
In this talk, we will introduce the concepts and methods of self-supervised learning (SSL). We’ll explore how to build foundation models using SSL techniques and deploy them for various downstream tasks.
Our discussion will span several application domains, with a particular emphasis on speech and audio classification, including the development of audio large language models. Additionally, we will briefly touch upon the computational resources required for training AI and machine learning systems.
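To make the core idea of SSL concrete ahead of the talk, below is a minimal, illustrative sketch (not taken from the speaker's material) of a masked-prediction pretext task: both the inputs and the targets are derived from unlabeled data, so the data supervises itself. All names, shapes, and the masking ratio are assumptions chosen purely for illustration.

```python
# Illustrative sketch: turning unlabeled sequences into (input, target) pairs
# via a masked-prediction pretext task. Names and shapes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are unlabeled audio feature frames: (num_frames, feature_dim).
unlabeled_features = rng.standard_normal((100, 40))

def make_masked_prediction_task(features, mask_ratio=0.15):
    """Create inputs and targets from unlabeled data by masking random frames.

    The masked frames become the prediction targets, so no human labels
    are needed -- the data supervises itself.
    """
    num_frames = features.shape[0]
    num_masked = int(mask_ratio * num_frames)
    masked_idx = rng.choice(num_frames, size=num_masked, replace=False)

    inputs = features.copy()
    inputs[masked_idx] = 0.0           # hide the selected frames
    targets = features[masked_idx]     # the model must reconstruct these

    return inputs, targets, masked_idx

inputs, targets, masked_idx = make_masked_prediction_task(unlabeled_features)
print(inputs.shape, targets.shape)     # (100, 40) (15, 40)
```

A model trained to reconstruct the masked frames learns general-purpose representations from unlabeled data, which can then be adapted to downstream tasks such as speech and audio classification.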
More information about the speaker:
Zheng-Hua Tan is a Professor of Machine Learning and Speech Processing, a Co-Head of the Centre for Acoustic Signal Processing Research (CASPR), and the Machine Learning Research Group Leader in the Department of Electronic Systems at Aalborg University, Denmark. He is also a Co-Lead of the Pioneer Centre for Artificial Intelligence, Denmark.
He was a Visiting Scientist/Professor at the Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT), Cambridge, USA (in 2022, 2017 and 2012).
He is a Member of the IEEE Speech and Language Processing Technical Committee and the IEEE Signal Processing Society (SPS) Conferences Board. He was the elected Chair of the IEEE SPS Machine Learning for Signal Processing Technical Committee (MLSP TC) from 2021 to 2022 and a Member of the MLSP TC from 2018 to 2023. He was a TPC Vice-Chair for ICASSP 2024, Seoul, Korea.
Prices
Participant, not a member of IDA: 100 kr.
Company member: 0 kr.
Member of organiser: 0 kr.
Unemployed IDA member: 0 kr.
Member: 0 kr.
Senior member: 0 kr.
Student member: 0 kr.
Practical Information
Where
Kalvebod Brygge 31-33
1780 København V
When
17:00 - 19:00