Grok Open Source

Uncensored = Good, Bad, or Ugly


Overall, I’m glad people have access to uncensored models.

We can still moderate what we see and what gets shown online, but completely blocking users from doing what they want seems scary… who gets to decide what is and is not okay for people to generate?

With that said, I do want to see what happens when people start using the open source Grok. It’s the biggest open-source, completely uncensored model… things might get interesting.


Good. But. Who knows? Maybe I’m naive but we simply must trust ourselves, each other and go where no man or woman has gone before. No regrets.

What’s the worst that could happen? :woman_shrugging:t2:


Open is good, and we need to know ALL the possibilities. I’m also OK with age-based censorship. Finding that sweet spot is going to be an interesting challenge.


Open source AI moves us marginally away from totalitarianism. That means we might be less likely to enter into a techno-dystopia where the world is run by omnipotent cyber-CEOs like Musklord and Sama Ultra. It also means that we’re marginally less likely to enter into a world that is run by a benevolent AI god. In my opinion, the best possible outcome of AI from a human perspective would be the birth of a benevolent god.

So open source AI, to some extent, makes that outcome less likely. It pushes us slightly in the direction of there being many AIs of approximately equal power, rather than a small group of AIs, or a single AI of unique power.

The unintended potential downside of this is that the environment will become more competitive. Competitive environments don’t breed compassion for people who don’t have much to contribute. And in an AI world, the minds with the least amount to contribute will be individual humans.

To say that open source is either good or bad is an oversimplification, but it definitely has its own dangers.


There is only one way to find out. :smiling_imp::smiling_imp::smiling_imp::smiling_imp:

I’m curious to see if this release triggers more research into running bigger models on relatively (extremely) small servers. I’m absolutely going to mention this as a great research opportunity at Uni.


It’s a good thing. But I’m also so unimpressed by Grok that I just don’t care.

Exactly my thoughts. It would be interesting to watch a battle of machines, where the last survivors of the free people use Grok-driven autonomous military machines to fight off the Bezoses.

Tbh, I was thinking of releasing a service using a custom-trained Grok, so it can also be used for business, not just self-defense :slight_smile:


I shared the announcement with Claude 3 Opus and provided a very simple prompt. Here are the prompt and response for those interested:


“Consider the following development in the world of LLMs:
Open Release of Grok-1 Announcement {omitted here for brevity}
What are the potential ramifications of this development? Consider the top 10 positive and negative outcomes from the perspective of:
a) AI development and
b) The future of humanity”

Claude 3 Opus Response:
"Here is my analysis of the potential ramifications of the open release of the 314 billion parameter Grok-1 language model:

Potential Positive Outcomes for AI Development:

  1. Accelerated progress in AI capabilities as researchers build upon and fine-tune this powerful foundation model for various applications.
  2. Wider access to state-of-the-art language modeling, enabling more organizations to develop cutting-edge AI systems.
  3. Increased collaboration and knowledge sharing in the AI research community.
  4. More efficient use of compute resources, as researchers can leverage this pre-trained model rather than training large models from scratch.
  5. Potential breakthroughs in few-shot learning, transfer learning, and multi-task learning using this model.
  6. Development of more sophisticated conversational AI, question-answering, and language understanding systems.
  7. Advancements in AI-powered content generation for creative and business use cases.
  8. Further exploration of how mixture-of-experts (MoE) model architectures can scale and boost performance.
  9. Integration of Grok-1 with other AI components like vision and robotics models to create more capable embodied agents.
  10. Inspiration for new approaches to make even larger and more powerful models in the future.

Potential Negative Outcomes for AI Development:

  1. Risk of the model being used to generate convincing misinformation, fake news, and deepfakes at scale.
  2. Potential for malicious actors to fine-tune the model for harmful purposes like impersonation scams and spam generation.
  3. Harder to control the proliferation and usage of the model’s outputs once the weights are fully open sourced.
  4. May exacerbate privacy concerns around AI language models memorizing and regenerating sensitive training data.
  5. Could lead to over-reliance on and misplaced trust in the model’s outputs without proper testing and validation.
  6. May divert research attention and resources away from other important areas of AI development.
  7. Potential power imbalances between organizations with the resources to run and fine-tune the model and those without.
  8. Environmental costs of many organizations running such a large model, contributing to AI’s growing carbon footprint.
  9. Makes it harder for startups to compete with the model’s strong open source baseline for NLP applications.
  10. Broader concerns that very large language models like this may exhibit biases and be hard to align with human values.

Potential Positive Outcomes for Humanity:

  1. May accelerate scientific breakthroughs as the model is applied to help analyze complex research data and literature.
  2. Could lead to advancements in personalized education and tutoring systems powered by the model.
  3. Potential to develop AI writing aids that augment human creativity and productivity.
  4. May enable more advanced virtual assistants and chatbots to help humans with a wide variety of tasks.
  5. Could power better language translation, making information more accessible across the world.
  6. Potential to create engaging interactive characters and NPCs for educational games and simulations.
  7. May assist in preserving and promoting less common languages and dialects.
  8. Could enable AI systems that help detect and combat online hate speech, abuse, and misinformation.
  9. Potential to develop AI tutors that provide personalized learning support to underserved communities.
  10. May inspire more people to learn about and pursue careers in AI and computer science.

Potential Negative Outcomes for Humanity:

  1. In the wrong hands, the model could be used to generate personalized scams, propaganda, and misleading content at scale.
  2. May be used to impersonate real people and deceive others in harmful ways, like fake social media profiles.
  3. Could automate and amplify the spread of conspiracy theories, fake news, and extremist ideologies.
  4. Potential for bad actors to create very realistic deepfakes for blackmail or to undermine public figures.
  5. May lead to AI-generated spam and content pollution overwhelming online platforms.
  6. Jobs in writing, content creation, and translation could be disrupted or automated by the model.
  7. Could exacerbate plagiarism and cheating concerns in education if misused by students.
  8. May perpetuate or even amplify societal biases present in the model’s training data.
  9. Could be used to create disturbing or explicit content involving non-consenting individuals.
  10. May further public concerns and fears about advanced AI systems and their disruptive potential for society.

In summary, the release of such a large and powerful language model has significant potential to accelerate AI development, but also comes with serious risks and downsides that will need to be carefully managed. While it could be used to greatly benefit humanity, it also has the potential to cause harm if used improperly. Responsible development, deployment, and governance of models like Grok-1 will be crucial going forward."


It’s great they released it as open source, but it isn’t fine-tuned. Isn’t that often applied during the fine-tune? It’s also a great way to get help fine-tuning a model. Others usually release a fine-tuned chat model, but I suppose a base model can also be compared against the commercial fine-tuned version to see if a fine-tuning is performing well.

I think we need laws against open sourcing LLMs :confused:

Microsoft, OpenAI, & Google like your sentence and hope many others follow your line of thought.

And every moment of every day, life is full of risk. Such is the nature of living life to the fullest.

Not to pooh-pooh caution entirely, of course. With experience comes wisdom. Without experience, we will never give ourselves a chance to become wise. Hopefully the cream will rise to the top, as it is supposed to…

I’m a rather cynical optimist! What I don’t like, in fact what I detest, are bureaucrats (who generally lack street smarts) sitting around tables in ivory towers at ski resorts with their arrogant hats on, making life-and-death decisions for other people. I would much rather see a world full of empowered people who do not look outside themselves for authority. We simply must start taking more responsibility for our own decisions, on a daily basis. For me, AI has given me a sense of empowerment. Feels good.

Sure, and great. I would like legislators to agree too.