Like Ra's Naughty Forum

Full Version: Writing hypno-scripts with AI
(14 Oct 2023, 16:01) Lycradjin Wrote: Imagine being able to create your own hypnosis script, tailored to your specific needs and preferences, with just a few prompts. And then, with the help of AI, being able to automatically convert that script into a hypnosis audio file, complete with a soothing, slow-speaking voice.
That would be great. I hope the music is optional (I prefer it without) and the script is available (so speakers of other languages can translate it).
I've already developed the idea a bit further and tested some loose components.

The idea is:

I'll use a set of hypnosis scripts and books to further train the AI (or provide it with a vector dataset).
You can compile a prompt about what the hypnosis session should focus on.
The AI can then generate a hypnosis session for you, which you can fine-tune yourself.

The output is a text, which you can then use as input for the hypnosis generator.
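
As a very rough sketch of that prompt-to-script step (the model id, prompt and file name below are placeholders I made up for illustration, not a finished design), the flow could look something like this with Hugging Face transformers:

Code:
from transformers import pipeline

# Placeholder model id; any locally hosted (and suitably unrestricted) model would do.
generator = pipeline("text-generation", model="meta-llama/Llama-2-13b-chat-hf")

prompt = (
    "Write a 10-minute hypnosis induction focused on deep relaxation. "
    "Use slow, repetitive phrasing and address the listener directly."
)

# The generated script text is what later goes into the text-to-speech generator.
script_text = generator(prompt, max_new_tokens=1024, do_sample=True)[0]["generated_text"]

with open("session_script.txt", "w") as f:
    f.write(script_text)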

In the generator, I'm considering these options:
- You choose a language.
- You can select a preferred voice (male, female, deep voice, stern voice, etc.)
- You can set the speed and pitch of the voice.
- You can configure pauses, timing, and intonations of the voice.
- You can select a background sound or upload one yourself as an MP3 file.

Additional configuration options:
- Fine-tuning the volume of the voice and background.
- Playing extra sounds at a specific moment (or at a specific word).
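
Purely to make the above concrete, here is what such a settings object could look like in Python; every field name and default value is my own illustrative invention (Python 3.10+ for the type hints), not an existing tool:

Code:
from dataclasses import dataclass, field

# Hypothetical settings for the planned hypnosis-audio generator;
# all fields and defaults below are illustrative only.
@dataclass
class GeneratorSettings:
    language: str = "en-US"              # output language
    voice: str = "female-deep"           # preferred voice style (male, female, deep, stern, ...)
    speaking_rate: float = 0.85          # < 1.0 = slower than normal speech
    pitch_semitones: float = -2.0        # lower the voice slightly
    pause_scale: float = 1.5             # stretch the pauses between sentences
    background_track: str | None = "rain.mp3"    # built-in sound or an uploaded MP3
    voice_volume_db: float = 0.0         # fine-tune voice loudness
    background_volume_db: float = -12.0  # keep the background well below the voice
    # Extra sounds triggered at a specific word (or a timestamp)
    cues: dict[str, str] = field(default_factory=lambda: {"deeper": "bell.mp3"})

settings = GeneratorSettings(language="nl-NL", voice="male-stern")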
(21 Oct 2023, 10:52) Lycradjin Wrote: I'll use a set of hypnosis scripts and books to further train the AI

Which model are you going to use/train?
First, I'm going to work on the text-to-speech part.
I have already run a demo that uses Google's SSML formatting. This way you can do a lot with the text (such as pronouncing words in a different way, inserting moments of silence and playing background sounds).
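
For anyone who wants to try the same thing, a minimal sketch with the google-cloud-texttospeech client could look like this; the SSML text, voice name and output file are placeholders, and it only covers rate, pitch and pauses, not the background sounds:

Code:
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

# SSML lets you slow the voice down, lower the pitch and insert silences.
ssml = """<speak>
  <prosody rate="slow" pitch="-2st">
    Take a deep breath in...
    <break time="3s"/>
    and slowly let it go again.
  </prosody>
</speak>"""

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(ssml=ssml),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Neural2-D",  # example voice; pick any voice/language the API offers
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

with open("induction.mp3", "wb") as f:
    f.write(response.audio_content)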

(I'm about to listen to my first version of an adapted hypnosis script about a swimming slave who wears a speedo hydra suit and then transforms into a woman.)

Then I want to delve further into which AI I will use.
I'm now thinking of a Llama 2 variant, and one without restrictions, so that it doesn't start to sputter if you want to incorporate something erotic into it 😊

And what I think is really cool is that I can generate two audio files, so that you get slightly different audio in each ear. From what I understand, this gets many people into hypnosis more quickly.
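
Just to illustrate the two-renders-into-one-stereo-file idea (the file names are placeholders; this assumes the pydub library plus ffmpeg is installed):

Code:
from pydub import AudioSegment

# Two slightly different renders of the same script, one per ear (placeholder names).
left = AudioSegment.from_mp3("induction_take1.mp3").set_channels(1)
right = AudioSegment.from_mp3("induction_take2.mp3").set_channels(1)

# Pad the shorter render with silence so both channels are equally long.
length = max(len(left), len(right))
left = left + AudioSegment.silent(duration=length - len(left))
right = right + AudioSegment.silent(duration=length - len(right))

# Merge into a single stereo file: take 1 in the left ear, take 2 in the right ear.
stereo = AudioSegment.from_mono_audiosegments(left, right)
stereo.export("induction_stereo.mp3", format="mp3")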

My ultimate dream is to create a hypnosis file that can be listened to by several people at the same time. And that during their hypnosis they can do things to the other person who is also listening.
(23 Oct 2023, 22:31) Lycradjin Wrote: My ultimate dream is to create a hypnosis file that can be listened to by several people at the same time. And that during their hypnosis they can do things to the other person who is also listening.
Yes! Speaking of egregors! Actually, it should be possible. However, there must be enough energy and the tuning must be perfect (in other words, great meditation and visualisation techniques are a must here).

Mmm... Do we need a separate thread for Chaos Magic? Yes, we do 😉
Tried to fine-tune Mythalion-13B and asked it to write a silly script.

The model was tuned on an additional tiny dataset of 757 hypno scripts, including BS.
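
For context: the `type: alpaca` dataset referenced in the config below is just a JSON list of instruction/input/output records. A single made-up entry (the field contents here are purely illustrative) can be written out like this:

Code:
import json

# One illustrative alpaca-format record; the real dataset_train.json holds 757 of these.
record = {
    "instruction": "Write a short hypnosis induction focused on deep relaxation.",
    "input": "",
    "output": "Settle back, let your eyes close, and notice how heavy your arms are becoming...",
}

with open("dataset_train.json", "w") as f:
    json.dump([record], f, ensure_ascii=False, indent=2)

That file is what `path: /root/data/dataset_train.json` points at in the config; the whole run is then typically launched with axolotl's CLI (depending on the version, something along the lines of `accelerate launch -m axolotl.cli.train config.yml`).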

Config
Code:
base_model: PygmalionAI/mythalion-13b
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true

load_in_8bit: true
load_in_4bit: false
strict: false

datasets:
  - path: /root/data/dataset_train.json
    type: alpaca
dataset_prepared_path:
val_set_size: 0.01
output_dir: ./lora-out

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 256
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 8
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00003

train_on_inputs: true
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
eval_steps: 0.05
eval_table_size:
eval_table_max_new_tokens: 128
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"

Some results:

'Tuned' model
[generated script sample in spoiler]

Same model, but not 'tuned'
[generated script sample in spoiler]
(02 Nov 2023, 14:44) shinybambi Wrote: 'Tuned' model
WOW! So eloquent! The words sound like a stream of water, no "stuck" moments. Nice one! 👍
It still insists that Bambi is a famous deer; I guess it needs a better dataset and more fine-tuning... and I have no idea what I'm doing.
Same prompt after additional tuning of the same base model.
[generated script sample in spoiler]

... but this one keeps endlessly repeating itself most of the time.
(03 Nov 2023, 09:37) shinybambi Wrote: ... but this one keeps endlessly repeating itself most of the time.
Can you increase the temperature? Or whatever it's called in Llama? To make the output less predictable.
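
In case it helps: with a Hugging Face transformers setup those knobs are `temperature` and, for the looping specifically, `repetition_penalty`, passed at generation time. A minimal sketch, using the base model from the config above and a placeholder prompt:

Code:
from transformers import pipeline

# Base model from the config above; swap in the locally fine-tuned checkpoint as needed.
generator = pipeline("text-generation", model="PygmalionAI/mythalion-13b")

output = generator(
    "Write a short hypnosis induction about sinking deeper with every breath.",
    max_new_tokens=512,
    do_sample=True,           # sample instead of always picking the most likely token
    temperature=1.1,          # higher = less predictable word choices
    top_p=0.9,                # nucleus sampling trims the unlikely tail
    repetition_penalty=1.15,  # penalises tokens that have already been generated
)
print(output[0]["generated_text"])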