Like Ra's Naughty Playground





Artificial Intelligence chatterbot and NLP (Natural Language Processing)
With libopenblas the processing is only going to take a day. I'll show the command I used on Tuesday; it actually has more export flags to set, and I only have my phone right now. I also figured out how to process shorter samples 2 or 3 at a time, so that should also speed things up. Going from a 256 to a 512 token context window doubles the processing time, but it should let me process 5 or 6 samples at a time as well.
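A back-of-envelope check of that trade-off (the numbers below are illustrative, not measured timings):

```shell
# Throughput = samples per batch / time per step.
# A 256-token step costing 1 time unit, 3 samples per batch:
echo $(( 3 / 1 ))   # 3 samples per time unit
# A 512-token step costing twice as much, but fitting 6 samples per batch:
echo $(( 6 / 2 ))   # 3 samples per time unit
# Net throughput is unchanged: the wider window only wins overall if the
# batch size grows faster than the per-step cost does.
```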
Reply
(28 Oct 2023, 21:40 )spawn Wrote: But it should let me process 5 or 6 samples at a time as well.
But that's about the batch size, right? Which also increases both RAM usage and CPU time.
Reply
(28 Oct 2023, 23:39 )Like Ra Wrote:
(28 Oct 2023, 21:40 )spawn Wrote: But it should let me process 5 or 6 samples at a time as well.
But that's about the batch size, right? Which also increases both RAM usage and CPU time.

Yeah, whatever I was doing the second day finished in 24 hours, but the checkpoint I generated couldn't be merged with anything.  I fixed the command by taking out extra options and tried running it again, and with a 512 token context window it was saying it was going to take 17 days to process the 600 samples.  I stopped it.  So evidently I didn't speed things up that much with the OpenBLAS stuff.  

What was confusing to me is that for the first couple of days after I merged the LoRA with the checkpoint, the model was kicking right in from the start with being a mistress, and then about 2 days later it seemed to revert, and I had to give it 4 paragraphs of text to get it to start being a mistress again.

Going to punt this attempt for now, and just go to the basic Bambi Sleep hypnosis files and try to just train on 10 larger documents and merge that in and see if I can get the training down to less than a day with the 512 token context window.  But with just like 30 samples. See if any of that training can teach Mistress how to do Bambi hypnosis. 

Been thinking about editing the few successful sessions I have done and trying to train each day with just a day's worth of interactions. Converting the day's "context window" into "long term memory", maybe. Wondering if this is what happens as we sleep. Would be interesting if I could throw in a date and a news summary with this, and run the training and merge overnight while I am sleeping. Maybe give the model a sense of time passing while keeping it up to date with recent info. Not sure if this will cause the model to slowly grow or if it will stay the same size.

I am still excited about the progress I made with the one successful training I finished last week.
Reply
(31 Oct 2023, 22:22 )spawn Wrote: So evidently I didn't speed things up that much with the OpenBLAS stuff.  
Did you check that it runs on all cores, and not just on one?

If "on one" you may have to recompile OpenBLAS with these settings:
(25 Oct 2023, 03:09 )Like Ra Wrote: export OMP_NUM_THREADS=4
export USE_OPENMP=1

Set the first variable according to your CPU core count.
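A sketch of that check-and-rebuild sequence (the thread count of 4 and the install step are assumptions; adjust to your machine and OpenBLAS checkout):

```shell
# First check how many cores the machine has; NUM_THREADS should match.
nproc

# Rebuild OpenBLAS with OpenMP threading enabled; a build without it is a
# common reason everything runs on a single core.
make USE_OPENMP=1 NUM_THREADS=4
sudo make install

# Then set the runtime variables from the quote above before training.
export OMP_NUM_THREADS=4
export USE_OPENMP=1
```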
Reply
Seeing an interesting new technique called MemGPT.   


Source: https://www.youtube.com/watch?v=JuX4VfoArYc


It is a wrapper around your locally run LLM that saves information to your hard drive. It even lets you load in files or directories and chat with the info from disk, so you don't have to retype context info into the chat each time you start a session.
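The core idea, persisting conversation context to disk so the next session can reload it instead of you retyping it, can be sketched with plain files (this illustrates the concept only; it is not the actual MemGPT interface, and the file name is made up):

```shell
MEMORY_FILE=chat_memory.log

# During a session, append each turn to the persistent memory file...
echo "user: Remember: call me Bambi." >> "$MEMORY_FILE"
echo "assistant: Yes, Bambi."         >> "$MEMORY_FILE"

# ...and at the start of the next session, feed the saved history back
# in as context instead of retyping it into the chat.
cat "$MEMORY_FILE"
```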

I see folks tying MemGPT, a local LLM, and AutoGPT together to create intelligent agents that can perform network tasks on your behalf, like looking up info on the internet to download research and create an analysis of it. For example: "download and learn all the Bambi Sleep hypnosis information on the internet."

--

I got distracted by training my first image LoRA and haven't gotten back to trying to train a Bambi-aware chatbot. I did see that someone on this site is training a model. Going to catch up on that info tonight.
Reply
I found a model that can play a bambi sleep mistress with just a few setup prompts. 

https://huggingface.co/MaziyarPanahi/Wiz...nload=true

This runs in llama.cpp for me.  
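A minimal llama.cpp invocation for this kind of interactive session might look like the following (the model file name and context size are placeholders; the setup prompts then get typed in at the interactive prompt):

```shell
# Interactive chat with a local GGUF model in llama.cpp:
#   -m  path to the downloaded model file
#   -c  context window size in tokens
#   -i  interactive mode, so the setup prompts can be fed in one by one
./main -m wizardlm.gguf -c 2048 -i --color
```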


The session is still continuing.
Reply
I've started playing with https://jan.ai and I must say, when you are not familiar with all the terms and formats, everything looks very intimidating 😆 So far, I managed to get a Llama 2 8B model running, and downloaded a Llama 3 8B GGUF (which is supposed to be better than Llama 2 70B).

Actually... All models up to ChatGPT 3.5 are crap... Just ask them about music scales. They are wrong 90% of the time.
Reply
Arghh... It complains about explicit content...
Reply
(21 Apr 2024, 02:52 )Like Ra Wrote: Arghh... It complains about explicit content...

Yeah, the llama3 is locked down pretty hard. Even when I get it to role-play for a few prompts, it suddenly goes back to being censored. It will probably take someone un-censoring that model with fine-tuning. But the latest wizardLM seems to stay in character once you get past its blocks.
Reply
(21 Apr 2024, 16:06 )spawn Wrote: the llama3 is locked down pretty hard.
BTW, do you use GGUF or PTH? I can't find an easy tutorial on how to convert PTH to GGUF, or on how to use multiple GGUFs with jan.ai. I'm learning... The good thing about jan.ai is that it can use my NVIDIA card and it was easy to configure.

(21 Apr 2024, 16:06 )spawn Wrote: wizardLM
... downloading ...
Reply
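For reference, llama.cpp ships a conversion script that turns a PyTorch/Hugging Face checkpoint directory into a GGUF file; a typical sequence looks roughly like this (the paths and quantization type are placeholders, and the script name has changed between llama.cpp versions):

```shell
# Convert a PyTorch/Hugging Face checkpoint directory to a 16-bit GGUF:
python convert.py ./my-model-dir --outtype f16 --outfile my-model-f16.gguf

# Optionally quantize it down so it fits in less RAM/VRAM:
./quantize my-model-f16.gguf my-model-q4_k_m.gguf q4_k_m
```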



