Toolformer: Language Models Can Teach Themselves to Use Tools
· 4 min read
Abstract
LMs achieve strong performance on many tasks using only a few examples and textual instructions.
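The "few examples plus a textual instruction" setup is usually realized as in-context (few-shot) prompting. A minimal sketch of how such a prompt might be assembled (the helper `build_few_shot_prompt` is hypothetical, not from the paper):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction plus worked input/output examples
    into a single few-shot prompt string (illustrative only)."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # The model is expected to continue after the final "Output:".
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)


prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("hello", "bonjour")],
    "cat",
)
print(prompt)
```

Feeding a prompt like this to an LM lets it solve the new task without any parameter updates, which is the ability the abstract refers to.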