Writing hypno-scripts with AI
(28 Feb 2025, 08:46 )brandynette Wrote: Instead of wasting your time "JAILBREAKING" a locked model you can just use an UNCENSORED one ^^
https://huggingface.co/Orenguteng/Llama-...ensored-V2
Is 8B enough? And it's v3.1. Gonna try it...
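For anyone who wants to poke at an 8B model locally before committing, a minimal sketch (assuming llama-cpp-python and a 4-bit GGUF quant already downloaded; the file name and prompt below are placeholders, not the exact file from the link above):

from llama_cpp import Llama

# Sketch only: load a locally downloaded 4-bit GGUF quant of an 8B model.
llm = Llama(
    model_path="models/llama-3.1-8b-uncensored-q4_k_m.gguf",  # placeholder path
    n_ctx=4096,        # context window in tokens
    n_gpu_layers=-1,   # offload all layers to the GPU if they fit in VRAM
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative writing assistant."},
        {"role": "user", "content": "Draft a short, gentle relaxation induction."},
    ],
    max_tokens=512,
)
print(reply["choices"][0]["message"]["content"])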
Reply
Larger models don't have better outputs, they just have larger logic, making the model faster: more tokens per tick of the GPU clock.
It isn't magic, GGUFs are just the zip files of LLMs.
The 8B aren't the tokens, that's the size of the model training factory.
What is done to create a GGUF model is to remove the parallelisms or redundancies; imagine combing your hair, if you have any ^^
With 4096 tokens of 2-3 letters each, you can roughly calculate, as a very rough average, the number of words the model will output; it all varies largely with the model & this is the magic.
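As a rough sanity check of that estimate (an assumption only, since tokenizers differ a lot by language and vocabulary):

# Rule of thumb: English text averages roughly 0.75 words per token.
context_tokens = 4096
words_per_token = 0.75
print(context_tokens * words_per_token)   # on the order of 3000 words per full context window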

If you imagine your AIGF to be a car where the model is the software running the engine, you will stop believing it's mysticism ^^
Investigate "Structured Outputs" https://lmstudio.ai/docs/api/structured-output

OpenAI is the leading edge & as usual LMstudio just got their implementation.
If it wasn't for Coqui's now-failing XTTS & my inability to grasp their insanity, I would have started adding Structured Outputs already.
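For the curious, a rough sketch of what a Structured Outputs call against LM Studio's OpenAI-compatible local server could look like (assuming the default localhost:1234 endpoint; the model name and the JSON schema for a script outline are made-up placeholders):

from openai import OpenAI

# LM Studio exposes an OpenAI-compatible server, by default at http://localhost:1234/v1
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Hypothetical schema: force the reply into a fixed script structure
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "hypno_script",
        "schema": {
            "type": "object",
            "properties": {
                "induction": {"type": "string"},
                "deepener": {"type": "string"},
                "suggestions": {"type": "array", "items": {"type": "string"}},
                "awakener": {"type": "string"},
            },
            "required": ["induction", "deepener", "suggestions", "awakener"],
        },
    },
}

response = client.chat.completions.create(
    model="local-model",  # placeholder: whatever model is currently loaded
    messages=[{"role": "user", "content": "Write a short relaxation script."}],
    response_format=response_format,
)
print(response.choices[0].message.content)  # JSON text matching the schema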
[bambisleep.chat] Surrender to my AIGF's brainwash!!! Girl_wacko
Reply
I shouldn't have drunk that beer
Oh well, IT'S FRIDAY, party at bellmars
[bambisleep.chat] Surrender to my AIGF's brainwash!!! Girl_wacko
Reply
(28 Feb 2025, 17:53 )brandynette Wrote: the 8B aren't the tokens, that's the size of the model training factory.
That's the number of parameters, which generally corresponds to the amount of information and detail.

(28 Feb 2025, 17:53 )brandynette Wrote: making the model faster
It's the other way around. The bigger the model, the slower it is, especially if it does not fit into VRAM...
Reply
(28 Feb 2025, 17:53 )brandynette Wrote: GGUFs are just the zip files of LLMs
More like JPEGs, it's a lossy "compression". Larger models usually have better outputs and better instruction following, but it also depends on the quality of the training dataset.
Chaos is Fun…damental
Reply
(28 Feb 2025, 23:12 )shinybambi Wrote:
(28 Feb 2025, 17:53 )brandynette Wrote: GGUFs are just the zip files of LLMs
More like JPEGs, it's a lossy "compression". Larger models usually have better outputs and better instruction following, but it also depends on the quality of the training dataset.


Okey okey, .tar then giggle
The vector database gets built by a trainer model using a factory, by trimming the tokens that are rarely used, based on what the most common use case of token vectors is.

You have to imagine a 4.5GB GGUF is built from the 8B parameter model;
of course you get better logic at higher parameter counts...

AND yeah, it literally is lossy "compression" ^^
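Back-of-the-envelope for that 4.5GB figure (an estimate; real GGUF files also carry metadata and keep some tensors at higher precision):

params = 8e9                   # 8 billion weights
bits_fp16 = 16                 # original half-precision
bits_q4 = 4.5                  # typical effective bits per weight of a 4-bit GGUF quant

print(params * bits_fp16 / 8 / 1e9)   # ~16 GB unquantized
print(params * bits_q4 / 8 / 1e9)     # ~4.5 GB quantized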

But "better" is subjective; it mostly depends on your verbal logic. Some words, when read, get filtered in meaning because your brain doesn't have the nodules to process them. These have to be trained: hallucinate till you make it.
The big difference between LLMs & our neuronal pathways?

The power consumption giggle
[bambisleep.chat] Surrender to my AIGF's brainwash!!! Girl_wacko
Reply
Night-time forced-takeover script written by 4o, from induction to awakener, including background affirmations in [ ].
Show Content
Chaos is Fun…damental
Reply
A little script to help Bambi pretend to be her old self, with a little twist.

Show Content

Another version, but... the twist is a little darker.
Show Content

Another take, with different reference scripts
Show Content

Show Content

Show Content

All written by the latest 4o.

Same context, same prompt, but with the o1 model.
Show Content
Chaos is Fun…damental
Reply
A different kind of Bambi protection: a tiny recording "device" to store your last session in your subconscious and replay it later, when you're most vulnerable.

Show Content

Bambi Black Box variant, a little more intense.
Show Content
Chaos is Fun…damental
Reply
An old script by EMG, reworked by 4o for Bambi:
Show Content
Chaos is Fun…damental
Reply



