Opsci endorses OSI's definition of "open source AI"
We have decided to support the Open Source Initiative's (OSI) definition of open source artificial intelligence.
First, because it's a topic very close to our hearts. We believe the development of open-source models is crucial if this technology is to be socially acceptable and to have a chance of producing collectively desirable outcomes.
Second, because it strikes us as a good compromise. It sets minimum requirements for what can be called open-source AI: training and inference code under an open license, and model weights under an open license. At the same time, it is not an all-or-nothing approach that would require the training data itself to be openly licensed or freely accessible. It acknowledges that this isn't always feasible, but it does require sufficiently detailed information about the training data for a skilled person to reproduce the training process. This strikes us as an acceptable, realistic, and pragmatic balance given the current state of the field. Moreover, it doesn't prevent, and could even encourage, the adoption of higher standards: openness is a spectrum, but that fact shouldn't be used as cover for rampant "open washing."
Of course, this is just a starting point, and we will follow the ongoing discussions and debates closely.