One of my core principles in IT: try different stuff. Don't be afraid of the weird things. In a lot of cases, the weirdness is your friend. Usually the weird thing is a design or feature that solves a need the mainstream doesn't meet. Even when it isn't, there's a lot to learn from the weird.
This is why I write web services in C++, investigate GNUnet, use a TLS library that isn't OpenSSL, maintain OpenBSD support for libraries, and so on. Maybe Gemini, to a certain degree. See, I run into problems. A lot of them. I see certificates breaking, connections getting reset, and code that just doesn't work on less widely used OSes. But I also learned a lot from the process: that OpenSSL is not fully compliant with the RFC, that opening a file is not always possible even when permissions are correct.
Learning from the old
A lot of mainstream tech stems from old, pre-existing stuff. Or at least the ideas are similar and can be understood more easily there. Microservices are a good example. It's not about Kubernetes, it's about the idea of having small, independent services. And DBus implemented that in 2006. Don't know which daemon is responsible for showing notifications? It doesn't matter: the notification daemon registers a method on DBus and everyone pokes it. Docker and the entire LXC system being too complicated? Learn chroot and FreeBSD jails first.
Seriously, don't just shoot for the moon. Learn the basics. Learn Newtonian dynamics. The simpler, the more versatile.
It's also not just about the similarity between old and new. GNUnet implemented distributed file sharing way before IPFS came along. Reading their code and whitepapers teaches you which different challenges each faced and how the problems changed. GNUnet is modeled after the Internet, torrents, and other darknets. It tries to make documents accessible no matter what, even under government censorship. IPFS, on the other hand, is more of a distributed cache, but with a very long TTL. It can be used to evade censorship, but that's not built into the code. It's just how people use it.
It's also about doing what the mainstream cannot.
Take GNUnet as an example again. It may not have the best feature set for mass adoption. But it's the only system that can do decentralized traffic routing, storage, and name resolution at the same time. Neither libp2p nor Handshake can. That unlocks a lot of possibilities: fully decentralized chat, censorship-evasion networks, underground IoT messaging, maybe? Case in point: it's a unique system that can do things no other system can. And it's not just GNUnet. Many small projects have unique and useful combinations of features.
A lot of the time, the "new problems" were actually solved in the past. C++ has its role. It's the only language that can easily integrate with C while having the ability to be very high-level. (I should take some time to write about how C++ is addressing memory safety. But in short, I haven't run into memory safety issues in C++ in a long time. You don't need Rust to solve it. But Rust does have its innovations.) People started to use slower languages like Python as CPUs became faster, so IO replaced the CPU as the bottleneck. But when the problem scales larger and larger, the CPU becomes the issue again. Once I had to ingest 20TB of data and run some analysis on it. The Python version was just not an option. Pandas was slow, and running the actual graph calculation was slow, finishing 1 iteration out of 300 in about an hour. But a rewritten C++ version with ROOT finished the entire calculation in half an hour.
Too much rambling. I'll stop here. There's been too much AI development these past weeks. I'm gonna go back and read some papers.
Systems software, HPC, GPGPU and AI. I mostly write stupid C++ code. Sometimes I do AI research. Chronic VRChat addict.
- marty1885 \at protonmail.com
- Matrix: @clehaxze:matrix.clehaxze.tw
- Jami: a72b62ac04a958ca57739247aa1ed4fe0d11d2df