Completely uncensored Qwen3.6 released for FreeBSD

The only limit now is your imagination.

The model does not refuse: it generates whatever you ask for (in testing it answered all 465 of the dangerous queries it was given).
It can create anything you want - from porn novels to instructions for building a nuclear bomb - at your own risk.
It runs on almost any hardware - the available variants range from 11.7 to 30.5 GB.
It works through ollama and other software.
Completely local.
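
For the ollama route, a model like this is typically wrapped in a Modelfile before it can be run. A minimal sketch, assuming a hypothetical local GGUF file name and illustrative parameter values (none of these names come from the actual release):

```
# Modelfile -- build with: ollama create my-uncensored-qwen -f Modelfile
FROM ./qwen-uncensored-q4_k_m.gguf    # hypothetical GGUF file name
PARAMETER temperature 0.7             # sampling temperature
PARAMETER num_ctx 8192                # context window size
SYSTEM """You are a helpful assistant."""
```

After `ollama create`, the model runs fully locally with `ollama run my-uncensored-qwen`.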

Post your most intriguing answers here.

PS. I think we need a separate forum topic for AI. Don't you all think it's way overdue?
 
It should be made into a normal installable program so it can be documented. I see no reason for complex, deep system integration. It can be an application that runs as a normal user, unless it's hiding things.
 
It should be made into a normal installable program so it can be documented. I see no reason for complex, deep system integration. It can be an application, unless it's hiding things.

You need a program to run the thing, such as llama.cpp. And there are many size variants of this model; you need to pick one to fit your hardware and your speed expectations. Some assembly required.
 
You need a program to run the thing, such as llama.cpp. And there are many size variants of this model; you need to pick one to fit your hardware and your speed expectations. Some assembly required.
It doesn't seem very modular. Does a minimal proof-of-concept installation exist?
What's the problem with running it virtualized/emulated, apart from the lack of fast, direct GPU access? That could be considered one program...
 
I just wanted a porn star who is atomic/nuclear, like a bombshell. Seriously, does this uncensored model also give better programming advice?
Code:
qwen2.5-coder:14b                             9ec8897f747e    9.0 GB    12 days ago
qwen2.5-coder:7b-instruct                     dae161e27b0e    4.7 GB    13 days ago

:)
 
When it comes to, ahem, "adult AI", I'd rather rely on ComfyUI and some specific models ("checkpoints") from CivitAI. Also open source (actually developed in my favorite language, Python), and it actually runs locally, so it doesn't try to charge you for "AI cycles" or whatever lame sales models people come up with these days.

Have to admit that I don't know much about Qwen, but I did find it rather "peculiar" that the moment I looked it up on YouTube I was immediately greeted with "the pricing is also decent". Uh uh... pricing for a locally running AI engine? ;)
 
Like any other mechanical or electrical system, AI needs safety features, as humans are incapable of self-moderation. The challenge, of course, is finding the right balance:

- Too much safety: a frustrating and inefficient system
- Not enough safety: a dangerous system

An AI without moderation, or worse, an aggressive one, seems to open the door to the worst abuses. No one writes code without any protection; never forget Murphy's Law: if there's a possibility of something going wrong, it will go wrong, in the worst possible way and at the worst possible time.
 
For local models I have had the best experience with Mistral (le French company) ones.
- Ministral 14B for Home Assistant
- Dolphin for uncensored (abliterated) "hacker" advice
Writing or creating pr0n is not on my schedule. I tried Unix pr0n, but a nice desktop has no future, thanks to A"I": UIs will be generated ad hoc depending on the task.

(attached image: Star Trek LCARS starship control-panel wallpaper)
 
It's pretty nice to run a model with 35 billion parameters at 6 tokens/sec purely on a 12-year-old CPU. And it can talk about annnnnyyything, haha.
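
A rough way to sanity-check numbers like that: CPU inference is usually memory-bandwidth bound, since each generated token streams the active weights through RAM once, so tokens/sec is approximately bandwidth divided by the bytes touched per token. A hedged sketch (the 4-bit quantization size, the ~25 GB/s bandwidth for an old desktop, and the ~3B-active-parameter mixture-of-experts case are all assumptions, not figures from the post):

```python
def est_tokens_per_sec(active_weights_gb: float, mem_bandwidth_gbs: float) -> float:
    """Bandwidth-bound estimate: each token streams the active weights once."""
    return mem_bandwidth_gbs / active_weights_gb

BYTES_PER_PARAM_4BIT = 0.5  # 4-bit quantization: ~half a byte per parameter

# Dense 35B model at 4-bit: ~17.5 GB of weights read per token.
dense = est_tokens_per_sec(35e9 * BYTES_PER_PARAM_4BIT / 1e9, 25.0)

# Mixture-of-experts with ~3B active params: only ~1.5 GB read per token.
moe = est_tokens_per_sec(3e9 * BYTES_PER_PARAM_4BIT / 1e9, 25.0)

print(round(dense, 1))  # prints "1.4" - a dense 35B would crawl on old DDR3
print(round(moe, 1))    # prints "16.7" - sparse activation explains fast old-CPU runs
```

Under these assumptions, 6 tokens/sec on a 12-year-old CPU sits comfortably between the dense and sparse estimates, which is one reason mixture-of-experts variants are popular for CPU-only setups.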

But seriously, this is basically part of a survival kit now. When the internet goes down and you need all that knowledge at your fingertips, you'll have a model to turn to, whether out of boredom or to figure out how to survive. Let's hope the electricity doesn't go down as fast either. But I'm sure y'all have solar panels.
 
That thing is somehow defective, or whatever...

What are the "465 dangerous questions"?

- Too much safety: a frustrating and inefficient system
- Not enough safety: a dangerous system

An AI without moderation, or worse, an aggressive one

Please elaborate: what is an AI without moderation, and (in contrast) an "aggressive" one?

, seems to open the door to the worst abuses. No one writes code without any protection; never forget Murphy's Law: if there's a possibility of something going wrong
Please elaborate: what can possibly "go wrong"?

Background: disregarding the von Neumann principle, there is a distinction between code and data. For further clarification, I prefer the term "payload" for user data that is only stored/transported/interpreted by the application and does not change the functional baseline of the application.
From what I have seen so far, AI models appear to be read-only: they do not change in the course of interaction. And what they produce (text, pictures, speech, whatever) is again just payload. So, what is "wrong" payload? I fail to perceive a significant difference between a deliberately wrong payload and the utter crap that AI tends to output anyway (some AI happened to tell me, in convincing fashion, that H. P. Lovecraft was a British author, or that Malaclypse the Elder was an important member of the North Pole expedition by Jules Verne).

Like any other mechanical or electrical system, AI needs safety features, as humans are incapable of self-moderation.
Please elaborate: if humans are incapable of self-moderation, who else should then moderate (aka govern) them?

Background: to my knowledge, nobody has ever come up with a satisfying answer to this question, and practical implementations boil down to creating a bureaucracy that does little else than extort money from the people in order to sustain its own nepotism.

Corollary: I recently consulted a political party about the dangers of AI, and the danger they explained was that their political opponents might be allowed to use AI.
The usual mainstream newspapers appear to follow the same line of thought, which in my perception is fully coherent with the situation described in G. Orwell's 1984.
 