Near Protocol has announced plans to build the world’s largest open-source AI model. The initiative was unveiled on the opening day of the Redacted conference in Bangkok, Thailand. The planned model, at 1.4 trillion parameters, would be larger than Meta’s open-source Llama models. Near Protocol said the effort will rely on crowdsourced research from multiple researchers and contributors to its new Near AI hub, and contributors are reportedly already on board to begin training a smaller 500 million parameter model on November 10.
Near Protocol said the project will continue to grow and eventually span seven models, with only the best researchers and contributors involved at each stage. The company intends to use an encrypted trusted execution environment to reward contributors while encouraging continuous updates throughout the process. Illia Polosukhin, co-founder of Near Protocol, said at the Bangkok event that the company plans to fund the expensive training through a token sale. He estimated the model would cost around $160 million to train, a large sum, but one he believes can still be raised in the crypto market.
Realizing that ambition will require the company to pour substantial resources into the project, above all a large number of GPUs concentrated in one place. Training over a decentralized compute network, however, depends on technology that does not yet exist, and the distributed training methodology the project requires also demands fast network connections. Polosukhin said he has not yet engaged with similar projects such as the Artificial Superintelligence Alliance, but would be happy to see both efforts pursue the same path.
Image: Freepik