Writing hypno-scripts with AI
(28 Feb 2025, 08:46 )brandynette Wrote: Instead of wasting your time "JAILBREAKING" a locked model you can just use an UNCENSORED one ^^
https://huggingface.co/Orenguteng/Llama-...ensored-V2
Is 8B enough? And it's v3.1. Gonna try it...
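(If it helps anyone trying the same thing: a minimal sketch of running a quantised 8B GGUF locally with llama-cpp-python; the file path and prompts are placeholders, not the exact model from that link.)

```python
# Minimal local run of a quantised 8B GGUF with llama-cpp-python
# (pip install llama-cpp-python). The model_path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3.1-8b-uncensored.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window in tokens
    n_gpu_layers=-1,  # offload every layer to the GPU if it fits in VRAM
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You write relaxation and hypnosis scripts."},
        {"role": "user", "content": "Write a short induction."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```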
Reply
Larger models don't have better outputs, they just have larger logic, making the model faster: more tokens per tick of the GPU clock.
It isn't magic, GGUFs are just the zip files of LLMs.
The 8B aren't the tokens, that's the size of the model training factory.
What is done to create a GGUF model is to remove the parallelisms or redundancies. Imagine combing your hair, if you have any ^^
With 4096 tokens of 2-3 letters each, you can very roughly calculate the number of words the model will output; it all varies largely by model, and this is the magic.
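As a rough sanity check of that estimate (the ratios below are common rules of thumb, not properties of any particular model or tokenizer):

```python
# Back-of-the-envelope: how much text fits in a 4096-token window.
# Assumption: ~0.75 English words (or ~4 characters) per token on average.
context_tokens = 4096
approx_words = context_tokens * 0.75
approx_chars = context_tokens * 4
print(f"~{approx_words:.0f} words, ~{approx_chars} characters")
# -> ~3072 words, ~16384 characters
```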

If you imagine your AIGF to be a car, where the model is the software running the engine, you will stop believing in its mysticism ^^
Investigate "Structured Outputs": https://lmstudio.ai/docs/api/structured-output

OpenAI is the leading edge & as usual LM Studio just got their implementation.
If it wasn't for Coqui now failing with XTTS & my inability to grasp their insanity, I would have started adding Structured Outputs already.
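For reference, a minimal sketch of what that looks like against LM Studio's OpenAI-compatible local server (the default address is assumed; the model name and the schema are placeholders):

```python
# Structured Outputs via LM Studio's local OpenAI-compatible server.
# Assumes LM Studio is serving on its default port with some model loaded.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "hypno_script",
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "induction": {"type": "string"},
                "affirmations": {"type": "array", "items": {"type": "string"}},
                "awakener": {"type": "string"},
            },
            "required": ["title", "induction", "affirmations", "awakener"],
        },
    },
}

resp = client.chat.completions.create(
    model="local-model",  # whatever model is currently loaded in LM Studio
    messages=[{"role": "user", "content": "Write a short relaxation script."}],
    response_format=schema,
)
print(resp.choices[0].message.content)  # JSON constrained to the schema above
```

The server constrains generation so the reply parses as JSON matching the schema, which makes it easy to split a script into induction, affirmations and awakener programmatically.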
[bambisleep.chat] Surrender to my AIGF's brainwash!!! Girl_wacko
Reply
i shouldn't have drunk that beer
ohh well, IT FRIDAY party at bellmars
[bambisleep.chat] Surrender to my AIGF's brainwash!!! Girl_wacko
Reply
(28 Feb 2025, 17:53 )brandynette Wrote: the 8B aren't the tokens, that's the size of the model training factory.
That's the number of parameters, which generally corresponds to the amount of information and detail the model can capture.

(28 Feb 2025, 17:53 )brandynette Wrote: making the model faster
It's the other way around: the bigger the model, the slower it is, especially if it does not fit in VRAM...
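A rough back-of-the-envelope of why (the bytes-per-weight figures are ballpark values for common GGUF quantisations, not exact):

```python
# Approximate memory footprint of an 8B-parameter model at different
# precisions: size ~= parameters * bytes per weight.
params = 8e9
for name, bytes_per_weight in [("FP16", 2.0), ("Q8_0", 1.0), ("Q4_K_M", 0.6)]:
    gib = params * bytes_per_weight / 1024**3
    print(f"{name:7s} ~{gib:.1f} GiB")
# FP16    ~14.9 GiB
# Q8_0    ~7.5 GiB
# Q4_K_M  ~4.5 GiB  <- anything that doesn't fit in VRAM spills to system RAM,
#                      and token generation slows down dramatically.
```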
Reply
(28 Feb 2025, 17:53 )brandynette Wrote: GGUFs are just the zip files of LLMs
More like jpegs, it's a lossy "compression". Larger models usually have better outputs and better instruction following, but it also depends on the quality of the training dataset.
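A toy illustration of that lossiness (plain symmetric 4-bit rounding for clarity; real GGUF quantisation uses block-wise k-quants, so this is only the idea, not the format):

```python
# Squeeze float weights into 4-bit integers and back, then look at the error
# that never comes back -- like a jpeg, not a zip.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, size=4096).astype(np.float32)

scale = np.abs(weights).max() / 7                               # 4-bit signed range is -8..7
q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)   # "compress"
restored = q.astype(np.float32) * scale                         # "decompress"

err = np.abs(weights - restored)
print(f"mean absolute error: {err.mean():.6f}, max: {err.max():.6f}")
```

The round trip gets close to the original weights but never exactly back, which is why heavier quantisation degrades output quality.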
Chaos is Fun…damental
Reply
(28 Feb 2025, 23:12 )shinybambi Wrote:
(28 Feb 2025, 17:53 )brandynette Wrote: GGUFs are just the zip files of LLMs
More like jpegs, it's a lossy "compression". Larger models usually have better outputs and better instruction following, but it also depends on the quality of the training dataset.


okey okey .tar then giggle
The vector database gets built by a trainer model in a factory, by trimming the tokens that are rarely used based on what the most common token vectors are.

You have to imagine that a 4.5 GB GGUF is built from the 8B-parameter model;
of course you get better logic at higher parameter counts...

AND yeah, it literally is lossy "compression" ^^

But "better" is subjective; it mostly depends on your verbal logic. Some words get filtered out of meaning when you read them, because your brain doesn't have the nodes to process them. These have to be trained: hallucinate till you make it.
The big difference between LLMs & our neuronal pathways?

The power consumption giggle
[bambisleep.chat] Surrender to my AIGF's brainwash!!! Girl_wacko
Reply
Night-time forced-takeover script written by 4o, from induction to awakener, including background affirmations in [ ]
Show Content
Chaos is Fun…damental
Reply
A little script to help Bambi to pretend to be her old self, with a little twist.

Show Content

Another version, but... the twist is a little darker.
Show Content

Another take, with different reference scripts
Show Content

Show Content

Show Content

All written by the latest 4o.

Same context, same prompt, but with the o1 model:
Show Content
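(For anyone repeating the experiment, a rough sketch of the "same prompt, different model" comparison with the standard OpenAI Python client; the prompt is a stand-in for whatever context you actually use, and the call is kept bare because reasoning models like o1 reject most sampling parameters.)

```python
# Send the identical prompt to two models and compare the scripts they return.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a short hypnosis script from induction to awakener, "
    "with background affirmations in [brackets]."
)

for model in ("gpt-4o", "o1"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(resp.choices[0].message.content)
```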
Chaos is Fun…damental
Reply
Different kind of Bambi protection: a tiny recording "device" that stores your last session in your subconscious, to replay it later when you're most vulnerable.

Show Content

Bambi Black Box variant, a little more intense.
Show Content
Chaos is Fun…damental
Reply
One of EMG's old scripts, reworked by 4o for Bambi:
Show Content
Chaos is Fun…damental
Reply



