Multimedia Evaluation Benchmark (MediaEval)
MediaEval 2020 Registration Now Open
https://multimediaeval.github.io
The Multimedia Evaluation Benchmark (MediaEval) offers challenges in the form of shared tasks. The goal of MediaEval is to develop and evaluate new algorithms and technologies for multimedia retrieval, analysis, and exploration. MediaEval tasks are innovative, involving multiple modalities (e.g., images, video, music, user interaction data, sensor data, lifelogging data) and focusing on the human and social aspects of multimedia. Our larger aim is to promote reproducible research that makes multimedia a positive force for society.
MediaEval 2020 Tasks:
- Emotion and theme recognition in music
- Emotional Mario: Believable AI agents in video games
- FakeNews detection on social media
- Flood-related multimedia: Analyzing social media on natural disasters
- Insight for wellbeing: Multimodal personal health lifelog data analysis
- Medico medical multimedia
- NewsImages: The role of images in online news
- No-audio multimodal speech detection
- Pixel privacy: Quality camouflage for social images
- Predicting media memorability
- Scene change: Fun faux photos
- Sports video classification
For details of the tasks and information on how to register visit: https://multimediaeval.github.io/editions/2020/
Tasks will start releasing data at the end of July, and submissions will be due 31 October. The MediaEval 2020 workshop will be an online event, held at the beginning of December; exact dates are to be announced. For more information, see https://multimediaeval.github.io or contact Martha Larson (m.larson at cs.ru.nl).