Google USM: Scaling Automatic Speech Recognition Beyond 100 Languages
· 9 min read
LaMDA is a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text.
The first challenge, safety, involves ensuring that the model’s responses are consistent with a set of human values, for example by avoiding harmful suggestions and unfair bias.