There is a growing trend in the machine learning community toward self-supervised approaches for pre-training deep networks. Self-supervised learning uses proxy supervised tasks, for example distinguishing parts of the input signal from distractors, or generating masked input segments conditioned on the unmasked ones, to leverage unlabeled corpora. These approaches make it possible to use the tremendous amount of unlabeled data available on the web to train large networks that extract high-level, informative, and compact features from raw inputs. For downstream applications, a simple task-specific model is then added on top of the self-supervised model, which can serve either as a fixed feature extractor or be fine-tuned together with the task-specific model.
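To make the masked-prediction idea concrete, here is a minimal, hypothetical sketch of such a proxy task on audio-like frame features, not drawn from any specific system: a fraction of frames is masked, and a simple least-squares "model" is fit to reconstruct each masked frame from its neighboring context.

```python
import numpy as np

# Hypothetical masked-prediction proxy task; shapes and values are illustrative.
rng = np.random.default_rng(0)
T, D = 200, 16                           # number of frames, feature dimension
feats = rng.standard_normal((T, D))      # stand-in for extracted audio features

mask = rng.random(T) < 0.15              # mask roughly 15% of frames
inputs = feats.copy()
inputs[mask] = 0.0                       # zero out the masked frames

# Context for each frame: its left and right neighbors, concatenated.
ctx = np.concatenate([np.roll(inputs, 1, axis=0),
                      np.roll(inputs, -1, axis=0)], axis=1)  # shape (T, 2*D)

# Proxy objective: predict the original masked frames from their context.
# A linear least-squares fit stands in for a trained network.
W, *_ = np.linalg.lstsq(ctx[mask], feats[mask], rcond=None)
recon = ctx[mask] @ W
loss = np.mean((recon - feats[mask]) ** 2)  # masked-reconstruction error
```

In practice the linear map would be replaced by a deep network trained by gradient descent over large unlabeled corpora, but the supervision signal, reconstructing masked content from unmasked context, is the same.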
Self-supervised approaches for speech and audio processing have recently been gaining attention. This special issue will bring together work on self-supervision for speech and audio processing. Alongside research on new self-supervised methods, data, applications, and results, it calls for novel work on understanding, analyzing, and comparing different self-supervision approaches. Topics of interest include (but are not limited to) the following research directions:
- New self-supervised proxy tasks for speech and audio processing.
- New approaches to using self-supervised models in speech and audio processing tasks.
- Comparisons of the similarities and differences among self-supervised learning approaches.
- Theoretical or empirical studies on understanding why self-supervision methods work.
- Exploring the limits of self-supervised learning for speech and audio processing, for example, a universal pre-trained model that generalizes across multiple downstream tasks, acoustic environments, or languages.
- Self-supervised learning approaches involving the interaction of speech and other modalities.
- Comparison or integration of self-supervised learning with other semi-supervised and transfer learning methods.
Submission Guidelines
Prospective authors should visit the IEEE JSTSP webpages for information on paper submission. Manuscripts should be submitted using the Manuscript Central system according to the schedule below. Manuscripts will be peer-reviewed according to the standard IEEE process.