

Grant Funded Researcher (B) - Multimodal Models
The University of Adelaide (Level B) $102,952 to $121,779 per annum, plus an employer contribution of up to 17% superannuation. There is one 2-year fixed-term position available, with the possibility of an extension, subject to the project end date. Flexible work arrangements can be negotiated with the right candidate.
Be part of the Australian Institute for Machine Learning – the largest computer vision and machine learning research group in Australia – and contribute to world-leading research projects at the Centre for Augmented Reasoning. The postdoctoral researcher for the project "Multimodal models – metric-based and feature-based membership inference" will be supervised by Associate Professor Qi Wu. This research topic will be explored in collaboration with CSIRO's Data61 group.
As real-world data exists in different modalities, the interaction and combination of multimodal data underpin the creation and interpretation of multimodal information in deep learning research. However, large pre-trained multimodal models often carry more information than single-modality models, and they are frequently applied in sensitive scenarios such as medical report generation and disease identification. Multimodal models may therefore lead to severe data privacy problems.
This project studies the data security and privacy issues of large-scale pre-trained multimodal models through the lens of membership inference attacks, which aim to determine whether or not a data record belongs to a model's training dataset. This postdoctoral role will investigate:
1. The problem that the input and output of a multimodal model are in different modalities, and that the confidence scores (posterior probabilities) corresponding to the output are unknown. This will be examined through two attack methods: metric-based membership inference, which uses similarity metrics in multimodal models to infer the membership of target data (a brief sketch of this idea follows the list below); and feature-based membership inference, which uses a pretrained shadow multimodal feature extractor to discriminate between input and output data.
2. Evaluation of multiple defense mechanisms against these attacks.
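For illustration only, the following is a minimal sketch of the metric-based idea described above, assuming a hypothetical image-captioning target model exposed solely through a caption-generating function and a hypothetical text embedder; the function names, record format, and threshold are all assumptions, not the project's actual method.

```python
# Minimal sketch of metric-based membership inference against an image-captioning
# model. Assumptions (hypothetical, not from the advertisement): `generate_caption`
# queries the target model, `embed_text` maps text to a vector, and each candidate
# record is an (image, reference_caption) pair. The attack needs no confidence
# scores: it thresholds a similarity metric between the model's output and the
# paired reference text, flagging high-similarity records as likely members.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def membership_score(image, reference_caption, generate_caption, embed_text) -> float:
    """Similarity metric used as the membership signal."""
    generated = generate_caption(image)                 # target model's output text
    return cosine_similarity(embed_text(generated),
                             embed_text(reference_caption))

def infer_membership(records, generate_caption, embed_text, threshold: float = 0.8):
    """Label each (image, reference_caption) record as member (True) or non-member."""
    return [membership_score(img, cap, generate_caption, embed_text) >= threshold
            for img, cap in records]
```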
This exciting role will contribute to the Centre for Augmented Reasoning's objective to build world-class research capability in machine learning while demonstrating the potential and impact of this knowledge for industries in Australia.
To be successful you will need:
- A PhD in computer science or a related discipline, or equivalent industry experience
- Experience and demonstrable expert knowledge in one or more of the following areas: computer vision, machine learning, and vision-and-language problems such as image captioning, visual question answering, and visual dialog
- Strong programming skills, including expertise in relevant languages (e.g. Python, C++) and libraries (e.g. NumPy, PyTorch, TensorFlow)
- A track record of publications in top-tier venues for machine learning, artificial intelligence, computer vision, natural language processing and/or robotics (e.g. NeurIPS, ICLR, CVPR, IC...)
Date: 09 March 2023
Type: Full Time