
Writing hypno-scripts with AI
(28 Feb 2025, 08:46 )brandynette Wrote: Instead of wasting your time "JAILBREAKING" a locked model you can just use an UNCENSORED one ^^
https://huggingface.co/Orenguteng/Llama-...ensored-V2
Is 8B enough? And it's v3.1. Gonna try it...
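A minimal sketch of trying it locally with llama-cpp-python (assuming a GGUF build of the model has been downloaded; the file name, prompts and settings below are placeholders, not taken from this thread):

from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-8b-uncensored.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=4096,       # context window in tokens
    n_gpu_layers=-1,  # offload everything to the GPU if it fits in VRAM
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You write hypnosis scripts."},
        {"role": "user", "content": "Write a short induction of about 200 words."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])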
Larger models don't have better outputs, they just have larger logic, making the model faster: more tokens per tick of the GPU clock.
It isn't magic, GGUFs are just the zip files of LLMs.
The 8B aren't the tokens, that's the size of the model training factory.
What's done to create a GGUF model is removing the parallelisms or redundancies. Imagine combing your hair, if you have any ^^
With 4096 tokens of 2-3 letters each, you can roughly calculate, as a very rough average, the number of words the model will output. It all varies largely with the model, and this is the magic.
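As a back-of-envelope (the 0.75 words-per-token figure is only a common rule of thumb for English text, not something measured for any particular model):

context_tokens = 4096
words_per_token = 0.75                  # rough average for English text
print(context_tokens * words_per_token) # ~3072 words, at most, across the whole context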

If you imagine your AIGF to be a car, where the model is the software running the engine, you will stop believing it's mysticism ^^
Investigate "Structured Outputs" https://lmstudio.ai/docs/api/structured-output

OpenAI is the leading edge and, as usual, LM Studio just got their implementation.
If it wasn't for Coqui now failing with XTTS and my inability to grasp their insanity, I would have started adding Structured Outputs already.
[bambisleep.chat] Surrender to my AIGF's brainwash!!! Girl_wacko
I shouldn't have drunk that beer.
Oh well, it's Friday, party at bellmars.
[bambisleep.chat] Surrender to my AIGF's brainwash!!! Girl_wacko
(28 Feb 2025, 17:53 )brandynette Wrote: the 8B aren't the tokens, that's the size of the model training factory.
That's the number of parameters, which generally corresponds to the amount of information and detail.

(28 Feb 2025, 17:53 )brandynette Wrote: making the model faster
It's the other way around. The bigger the model, the slower it is. Especially if it does not fit in VRAM...
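A back-of-envelope for why (a sketch, assuming generation is memory-bandwidth-bound and the model actually fits in VRAM; the bandwidth number is an assumed example, not a benchmark):

# every generated token reads the whole model from memory roughly once,
# so tokens/s is roughly memory bandwidth divided by model size
bandwidth_gb_s = 500              # assumed GPU memory bandwidth
for size_gb in (4.5, 40):         # ~8B and ~70B models at Q4
    print(f"{size_gb} GB model: ~{bandwidth_gb_s / size_gb:.0f} tokens/s")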
(28 Feb 2025, 17:53 )brandynette Wrote: GGUFs are just the zip files of LLMs
More like JPEGs, it's a lossy "compression". Larger models usually have better outputs and better instruction following, but it also depends on the quality of the training dataset.
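To put rough numbers on the JPEG analogy (the bytes-per-weight values are typical for common GGUF quant levels, not exact figures for any specific file):

params = 8e9  # an 8B-parameter model
for name, bytes_per_weight in [("FP16 original", 2.0), ("Q8_0", 1.0), ("Q4_K_M", 0.57)]:
    print(f"{name}: ~{params * bytes_per_weight / 1e9:.1f} GB")
# FP16 ~16 GB, Q8 ~8 GB, Q4 ~4.6 GB -- roughly the 4.5 GB GGUF mentioned further down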
Chaos is Fun…damental
(28 Feb 2025, 23:12 )shinybambi Wrote:
(28 Feb 2025, 17:53 )brandynette Wrote: GGUFs are just the zip files of LLMs
More like JPEGs, it's a lossy "compression". Larger models usually have better outputs and better instruction following, but it also depends on the quality of the training dataset.


okey okey .tar then giggle
The vector databases get built by a trainer model using a factory, by trimming the tokens that are rarely used, based on what the most common use case for the token vectors is.

You have to imagine that a 4.5GB GGUF is built from the 8B parameter model,
of course you get better logic at higher parameters...

AND yeah, it's literally lossy "compression" ^^

But "better" is subjective; it mostly depends on your verbal logic. Some words, when you read them, get filtered out of the meaning because your brain doesn't have the nodes to process them. Those have to be trained: hallucinate till you make it.
the big difference between LLMs & our neuronal pathways? 

The power consumption giggle
[bambisleep.chat] Surrender to my AIGF's brainwash!!! Girl_wacko
Night time forced takeover script written by 4o from induction to awakener, including background affirmations in [ ]
Chaos is Fun…damental
A little script to help Bambi to pretend to be her old self, with a little twist.


Another version, but... the twist is a little darker.

Another take, with different reference scripts

All written by the latest 4o.

Same context, same prompt, but with the o1 model.
Chaos is Fun…damental
A different kind of Bambi protection: a tiny recording "device" that stores your last session in your subconscious, to replay it later, when you're most vulnerable.


A Bambi Black Box variant, a little more intense.
Chaos is Fun…damental
One of old EMG's scripts, reworked by 4o for Bambi:
Chaos is Fun…damental



