> More precisely, we use activations from the last layer of the neural network as speaker embeddings. We aggregate the sigmoid outputs by summing all outputs class-wise over the whole audio excerpt to obtain a total amount of activation for each entry, then normalize the values by dividing them by the maximum value among classes. Analyzing these embeddings over time allows the system to detect speaker changes and to identify newly appearing speakers by comparing the extracted, normalized embedding with those previously seen. If the cosine similarity metric between the embeddings is higher than a threshold, fixed at 0.4 after a set of preliminary experiments, the speaker is considered new. Otherwise, we map its identity to the one corresponding to the nearest embedding.
Hi @cyrta, can you please elaborate on this paragraph from the paper? Below is my understanding; please correct me if I am wrong.
- "we use activations from the last layer of neural network as speaker embeddings." This is confusing, because the last layer would be a softmax layer, given the loss function of the network. Or did you mean that there is a dense layer with sigmoid activations before the softmax layer, and that its activations are used as speaker embeddings? What is the size of the extracted embeddings?
- The speaker embeddings are then summed class-wise over the entire audio excerpt and normalized by dividing by the maximum value among all classes. I'm not sure what happens after this. If the distance between an extracted embedding and every previously obtained normalized embedding is greater than 0.4, is it treated as a new speaker, and otherwise mapped to the speaker with the nearest (i.e., most similar previously seen) embedding?
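To make sure I'm reading the paragraph correctly, here is my understanding as a minimal sketch. I'm assuming "cosine similarity higher than a threshold" actually means cosine *distance* greater than 0.4 (otherwise the decision logic seems inverted), and the function names and shapes below are my own guesses, not from the paper:

```python
import numpy as np

def aggregate_embedding(frame_activations):
    """Sum sigmoid outputs class-wise over the whole excerpt,
    then normalize by the maximum value among classes.
    frame_activations: array of shape (num_frames, num_classes)."""
    total = frame_activations.sum(axis=0)   # shape: (num_classes,)
    return total / total.max()

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def assign_speaker(embedding, known_embeddings, threshold=0.4):
    """Return the index of the nearest known speaker,
    or None if the segment is treated as a new speaker."""
    if not known_embeddings:
        return None
    distances = [cosine_distance(embedding, k) for k in known_embeddings]
    nearest = int(np.argmin(distances))
    return None if distances[nearest] > threshold else nearest
```

Is this roughly what the paper does, or is the comparison done on the raw sigmoid outputs per segment rather than on the aggregated embedding?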
- Also, the paper does not discuss how silent regions are treated, e.g., whether any voice activity detector is employed, even though this contributes to the Diarization Error Rate.
Thanks.