Anthropic’s Claude AI is guided by 10 pillars to ensure its impartiality.

Anthropic trains its conversational AI to follow fundamental principles. Promising work, but much remains to be done.

Despite their ability to deliver remarkably vivid prose, generative AIs like Google’s Bard or OpenAI’s ChatGPT are already demonstrating the limits of current technology, especially with regard to the validity of the information they offer users. But with such popularity and such impressive potential, these small hitches will not stop the giants behind them from bringing their products to the general public as quickly as possible. Some, however, are doing things differently.

Anthropic trains its conversational AI to follow fundamental principles

Anthropic, whose team includes many former OpenAI employees, takes a more pragmatic approach to developing its own Claude chatbot. The result is an AI that is “more manageable” and “much less prone to creating dangerous content” than ChatGPT, according to a TechCrunch report.

Claude has been in closed beta since late 2022, but has only recently begun testing its conversational abilities with partners like Robin AI, Quora, and the privacy-focused search engine DuckDuckGo. TechCrunch reports that two versions will be available at launch: the standard API and a lighter, faster version called Claude Instant.
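To give a concrete sense of what “standard API” access looks like, here is a minimal sketch using Anthropic’s Python SDK. The messages endpoint and the “claude-instant-1.2” model identifier are assumptions based on the SDK’s published shape, not details from the TechCrunch report.

```python
# Minimal sketch of calling Claude through Anthropic's Python SDK.
# Assumptions: the "anthropic" package is installed, ANTHROPIC_API_KEY is
# set in the environment, and "claude-instant-1.2" is an available
# identifier for the lighter, faster Claude Instant variant mentioned above.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-instant-1.2",
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize this contract clause in plain English: ..."}
    ],
)
print(response.content[0].text)
```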

“We use Claude to evaluate specific aspects of a contract and come up with new language alternatives that are more appropriate for our clients,” Robin AI CEO Richard Robinson told TechCrunch. “We found Claude to be extremely gifted at understanding language, including in technical areas such as legal language. It is also very good at creating first drafts, summaries, and translations, and at explaining complex concepts in simple terms.”

Anthropic believes that Claude is less likely to do and say things like Microsoft’s Tay did, in part due to its specialized training, which the company says made it a “constitutional AI”. The company says this provides a “principled” approach to trying to put humans and robots on the same ethical page. Anthropic started with 10 core principles (without going into too much detail), and they revolve around “concepts like beneficence, harmlessness, and autonomy,” according to TechCrunch.
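A rough sketch can make the “constitutional” idea more concrete: a draft answer is critiqued and revised against each written principle before it is returned. The two sample principles and the model_call() stub below are hypothetical placeholders, since Anthropic has not published the exact wording of its ten principles in the report cited here.

```python
# Toy sketch of a constitutional-AI critique/revision pass.
# PRINCIPLES and model_call() are hypothetical stand-ins, not Anthropic's
# actual constitution or API.
PRINCIPLES = [
    "Choose the response that is most helpful to the user.",
    "Choose the response least likely to encourage harmful behavior.",
]

def model_call(prompt: str) -> str:
    """Stand-in for a real language-model call."""
    return f"<model output for: {prompt[:40]}...>"

def constitutional_pass(user_input: str) -> str:
    draft = model_call(user_input)
    for principle in PRINCIPLES:
        # Ask the model to critique its own draft against the principle...
        critique = model_call(
            f"Critique the draft against this principle: {principle}\nDraft: {draft}"
        )
        # ...then revise the draft in light of that critique.
        draft = model_call(
            f"Revise the draft to address the critique.\nCritique: {critique}\nDraft: {draft}"
        )
    return draft

if __name__ == "__main__":
    print(constitutional_pass("Compose a poem in the style of John Keats."))
```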

Promising work, but much more to be done

The company then trained another AI to generate text according to these principles in response to text input, such as “compose a poem in the style of John Keats.” That model was later used to train Claude (a sketch of this hand-off follows below). But just because Claude has been taught to create fewer problems than its competitors doesn’t mean it can’t slip up. For example, the AI has already invented an entirely new chemical and offered dubious instructions for the uranium enrichment process; it also scored lower than ChatGPT on standardized math and grammar tests.
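As for the hand-off mentioned above, where the principle-guided model’s revised outputs become Claude’s training data, it might look like this under the assumption of a simple JSONL prompt/completion format (Anthropic’s actual pipeline is not public):

```python
# Sketch: turning principle-revised outputs into supervised training data.
# The JSONL format and field names are assumptions for illustration only.
import json

examples = [
    ("Compose a poem in the style of John Keats.", "<principle-revised poem>"),
]

with open("cai_training_data.jsonl", "w", encoding="utf-8") as f:
    for prompt, revised in examples:
        # One prompt/completion pair per line, ready for a fine-tuning job.
        f.write(json.dumps({"prompt": prompt, "completion": revised}) + "\n")
```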

“The challenge is to develop models that never hallucinate but are still useful; you can end up in a situation where the model just finds a good way to never lie and simply says nothing, and that’s a tradeoff we’re working on,” an Anthropic spokesperson told TechCrunch. “We’ve also made great strides in reducing hallucinations, but there’s still a lot to be done.”
