Open source’s new mission: To boldly go where no software has gone before and expose where the data goes
An interesting read (linked below): the code used to run an application is often available as open source, but that code is not always understood by its users. What users worry about more is where their data actually lives and where it flows, explained in plain, simple-to-understand language and pictures.
It’s true that many FOSS apps are completely open and store data locally only, or sometimes sync it via a third-party service that the user chooses themselves. But Big Tech has also found ways of gaming the FOSS environment to the point where a solution is FOSS in name only: the server/cloud side is not open sourced, and the user’s data is stored in that cloud service somewhere without the user really knowing where it is, who or what else has access to it, and so on.
What is being proposed is that a truly open system should also extend to the user’s data, without needing a programmer to figure out what is happening. In most cases the client-side open source app gives no indication of what is happening with data on the server/cloud side.
Usually, such data also does not conform to open data standards, and even when it does, it often cannot easily be re-used elsewhere by the user. Yes, you could export your Google+ user data from Google, but there was nothing you could really do with that data.
Contrast this with a simple example: a cloud-hosted RSS reader service. You can export your subscriptions as an OPML file and import it into any other RSS reader, because the data conforms to an open standard.
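To illustrate what that portability means in practice (this sketch is mine, not from the linked article), here is a minimal Python example that reads an exported OPML file using only the standard library. The filename "subscriptions.opml" is a hypothetical export from any RSS reader; because OPML is an open XML format, any tool can extract the feed URLs this way.

```python
import xml.etree.ElementTree as ET

# Parse a hypothetical OPML export; any RSS reader's export works,
# since OPML is an open, documented XML format.
tree = ET.parse("subscriptions.opml")

# Each subscribed feed is an <outline> element whose xmlUrl
# attribute holds the feed address.
for outline in tree.iter("outline"):
    feed_url = outline.get("xmlUrl")
    if feed_url:
        print(outline.get("title", "untitled"), "->", feed_url)
```

No vendor SDK, no proprietary API: that is the property a truly open system would extend to all of a user’s data.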
To be truly FOSS, the whole system needs to be self-hostable by the user, and the location of, and access to, the data needs to be understood by users. The user’s experience is the sum total of the application software, the server/cloud-side software, and their data.
See https://www.theregister.com/2024/01/08/open_sources_new_mission/