Empowering the masses to wreak havoc (my view and wish for Large Language Models)

I've been asking myself a question lately, especially after Large Language Models became the new hot girl in town. Things are, mostly, owned and propelled by large mega-corps. The situation in the field of AI is even worse: so far only Facebook, Google and Microsoft have the capacity to build new LLMs, given how much data they need and how expensive they are to train. I don't see this trend breaking any time soon; we are heading straight towards some version of the cyberpunk dystopia. So, how can I throw a wrench into the steam engine driving us towards that grim future?

From my point of view, LLMs are not just a good chatbot or a convenient tool for coding. They are the holy grail of language processing and programming languages. Remember the premise we all learned in our first coding class: "translate actions into a form computers can understand". And remember COBOL, the failed attempt at a language designed to look like human English, so that non-scientists could easily create programs and perform daily tasks, but which failed because of how verbose and time-consuming it is. For example:

IDENTIFICATION DIVISION.
PROGRAM-ID. INPUT-CHECKER.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 USER-INPUT PIC 9(2).
01 WS-OUTPUT PIC 9(2).
PROCEDURE DIVISION.
MAIN-LOGIC.
    DISPLAY "Please enter a number: ".
    ACCEPT USER-INPUT.
    
    IF USER-INPUT = 1 THEN
        PERFORM PRINT-ONE-FOREVER
    ELSE
        PERFORM PRINT-ZERO-AND-EXIT
    END-IF.
    
    STOP RUN.
    
PRINT-ONE-FOREVER.
    MOVE 1 TO WS-OUTPUT.
    PERFORM DISPLAY-OUTPUT
        UNTIL 0 = 1.
    
DISPLAY-OUTPUT.
    DISPLAY "Output: " WS-OUTPUT.
    
PRINT-ZERO-AND-EXIT.
    MOVE 0 TO WS-OUTPUT.
    DISPLAY "Output: " WS-OUTPUT.
    STOP RUN.

LLMs do exactly that, but actually achieve it. I asked RWKV (an LLM that I run locally) to help with a hypothetical task, and it answered remarkably well. It gives off the feeling of "this program is actually trying to understand what I want, not what I say", even though that's totally not the case.

Youtube video: Intro to AI Safety, Remastered

Bob: Write a BASH command that scans a CSV of the format (name, salary, month) and finds the sum of salaries for a given month.

Alice: Sure, here is a bash snippet that scans a CSV file and finds the sum for a given month:

  month="Sep"
  sum=0
  while IFS=',' read -r name salary m; do
      if [ "$m" = "$month" ]; then
          sum=$((sum + salary))
      fi
  done < file.csv
  echo "Total salary for $month: $sum"

The snippet above loops through each row of the CSV file and, for every row whose month column matches the given month, adds the salary to a running total. It then prints the result. IFS=',' tells read to split each line on commas.
Note that this snippet assumes that the input file has the following format:

  name,salary,month
  John,10,Sep
  Alice,20,Sep
  Bob,15,Sep

It reads directly from file.csv and prints the total to standard output.

No one is going to be fascinated by this now; ChatGPT has been around for a while. But I can't overstate how powerful this sort of question answering is going to be. Imagine you are not a software developer. You are a cook starting your own bakery, and you wish to accept preorders over the internet to expand your business. Instead of going through platforms like Uber (just as an example) that handle the orders for you and take a cut, you could set up your own website with a form. Yet, instead of managing Google Forms and learning the technical side of spreadsheets, you just ask an LLM: "Look at this table, tell me how many loaves of bread I need for preorders today". Poof, there's the output!

I believe that in the short term it's more than likely that LLMs won't replace human technicians. But they will drastically reduce the barriers to entering any given field. I certainly hope this sort of power won't be solely owned by big tech. For one, that raises the barrier for everyone. And two, after society has gotten used to LLMs being a part of daily life, getting cut off would be catastrophic.
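For perspective, the program that the baker's question boils down to is tiny. Here is a sketch of the kind of one-liner an LLM might generate behind the scenes, assuming a hypothetical preorders.csv with (name, item, quantity) columns; the file name and format are my own invention:

```shell
# Hypothetical preorders.csv, one row per order: name,item,quantity
# Sum the quantity column for every row ordering "bread".
awk -F',' '$2 == "bread" { total += $3 } END { print total+0 }' preorders.csv
```

The baker never needs to see this; they only ever see the question and the number. That's the barrier reduction I'm talking about.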

I thought about "can't beat 'em, join 'em" for a while. Maybe it's the inner hacker fantasy, but I can't bring myself to make the big players stronger. Or to try to change things from the inside. Not saying I can't try, but engineers are execution engines in a corporation; decisions are made by the C-suite. The feeling of "we are so getting our asses kicked" just won't leave my mind.

Showering one day, I thought: what if we could make LLMs cheap? Cheap enough for an interested individual or a small group of people to train? The quality may not be as good as expensive commercial offerings, but good enough to build upon. See, the problem we face now is simply that everyone but big tech lacks the computing power and data to train large models. But public datasets (OpenAssistant, CommonCrawl, Wikipedia...) are good enough and large enough to make "something". With enough interest, people can build special datasets for their specific needs, and thus no longer depend on the AI providers.

In layman's terms: guns aren't to be afraid of if everyone has guns. Sure, someone will have bigger and better guns. But any gun, cheap or not, can fire and neutralize threats when used correctly. If we give everyone the same ability today's big tech has to disrupt markets, and the same ease of entering any given field, I believe a period of chaos will ensue. But eventually, we will have a fairer and more open market.

I realized what I want is the democratization of AI. Also, we can't just let NVIDIA be the monopoly supplier of AI processors; that way they control everyone. In my not-well-thought-out opinion, this translates to:

  • Run LLMs on dedicated low power neural processors
  • Multiple competitive vendors providing hardware for training and inference
  • Cloud vendors providing LLMaaS at reasonable cost
  • We find a way to delegate boring day-to-day tasks to LLMs
  • Communities building purpose-built LLMs for specific tasks

I feel that work in the following areas will bring the greatest impact:

  • Frameworks that support non-NVIDIA GPUs
  • Helping train efficient models like LLaMA or RWKV
  • Porting LLMs to ONNX so inference accelerators can execute them
  • Trying to train (not just fine-tune) LLMs and sharing the process
  • A large and uncensored LLM foundation model that is in the public domain (or under an equivalent license)

That's what I'm going to be working on in the near term. However, there are two valid arguments against my position.

First, my view depends on the assumption that AI won't become overly smart. There's a view in the AI safety community that the risk of advanced AI is like that of a nuke: why should we distribute nukes to everyone? It's more than likely that, even with extreme caution, AI will end up in the wrong hands. Distributing AI this way would only accelerate the downfall, as the only outcome is mutually assured destruction.

Secondly, the same issue as with gun control arises. LLMs can not only do good, but can also do massive harm to the public in the hands of bad actors. Just look at the flood of ChatGPT-generated content on the web, mostly from "creators" (using the word very loosely) wanting a quick buck from advertisers. However, I argue it is not a fair comparison. Bad actors already have access to LLMs: they have a financial interest in using them and can pay LLM providers accordingly. Countering that with a free cloud account ain't happening. It's possible this ends up being a race to the bottom, where both sides get better and better at the game until there's no distinction left between a real human and an AI. Which gets us back to problem one.

I don't have answers to these issues. But I still stand by my position.

Author's profile. Photo taken in VRChat by my friend Tast+
Martin Chang
Systems software, HPC, GPGPU and AI. I mostly write stupid C++ code. Sometimes I do AI research. Chronic VRChat addict.

I run TLGS, a major search engine on Gemini. It's used by Buran by default.


  • marty1885 \at protonmail.com
  • Matrix: @clehaxze:matrix.clehaxze.tw
  • Jami: a72b62ac04a958ca57739247aa1ed4fe0d11d2df