Like Ra's Naughty Playground
Artificial Intelligence chatterbot and NLP (Natural Language Processing)
(09 Mar 2023, 05:04 )dhf7b8g Wrote: 2. Torrent for LLaMA models appeared on 4chan almost instantly after downloads were given out to researchers
Any links?

(09 Mar 2023, 05:04 )dhf7b8g Wrote: I had to use a 70GB ram buffer
Is it a requirement? Or "a nice to have"? Can memory compression help (e.g. zram-config)? Are the requirements listed anywhere?

(09 Mar 2023, 05:04 )dhf7b8g Wrote: 24GB of vram
4090 ... 🤤 💸
Reply
(09 Mar 2023, 11:26 )Like Ra Wrote: Any links?
This is the original magnet from 4chan. magnet:?xt=urn:btih:b8287ebfa04f879b048d4d4404108cf3e8014352&dn=LLaMA&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce

There was a second magnet created and included in a pull request to the official LLaMA repo.

(09 Mar 2023, 11:26 )Like Ra Wrote: Is it a requirement? Or "a nice to have"? Can memory compression help (e.g. zram-config)? Are the requirements listed anywhere?
It's more of a nice-to-have. You can load from disk/swap in the webui I linked, but it will be much slower than having the model in VRAM; 8-bit quantisation doesn't work when the model is split across GPU+RAM/swap either.
Memory compression might help, but personally I haven't tried it.

There are no official requirements, as they expect you to have a handful of A100s if you are getting this model. I would recommend file size + 2 GB as the amount of VRAM needed for "good" speeds, since VRAM is a much more limiting factor than GPU power, but you can even run it from disk using only a CPU (as long as you don't care about speed).
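The "file size + 2 GB" rule of thumb above is easy to sanity-check with a few lines of arithmetic. The sketch below is illustrative only; the 2 GB overhead figure and the per-quantization byte counts are assumptions from this thread, not official requirements:

```java
// Rough VRAM estimate for loading an N-parameter model at a given
// quantization, following the "file size + ~2 GB" rule of thumb.
// Numbers are illustrative, not official requirements.
public class VramEstimate {
    static final double OVERHEAD_GB = 2.0; // activations, CUDA context, etc.

    // bytesPerParam: 2.0 for fp16, 1.0 for int8, 0.5 for 4-bit quant
    public static double estimateGb(long params, double bytesPerParam) {
        double weightsGb = params * bytesPerParam / (1024.0 * 1024.0 * 1024.0);
        return weightsGb + OVERHEAD_GB;
    }

    public static void main(String[] args) {
        long params7b = 7_000_000_000L; // smallest LLaMA variant
        System.out.printf("7B fp16: ~%.1f GB%n", estimateGb(params7b, 2.0));
        System.out.printf("7B int8: ~%.1f GB%n", estimateGb(params7b, 1.0));
    }
}
```

By this estimate the 7B model at 8-bit lands well under 24 GB, which is roughly why a 4090 keeps coming up in these discussions.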
Reply
I've been searching the internet for one of Google's pet projects, specifically the one detailed on arXiv and in their recent PaLM publication, called GLaM, mostly since it claims a 50 percent computational decrease compared to GPT-3 by using multiple feed-forward networks at the same time.

PS: does anyone know of a transformer attention mechanism that isn't written in Python?
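For context, GLaM's claimed savings come from sparse mixture-of-experts routing: a gate scores many feed-forward "experts" per token and only the top-scoring one (or few) actually runs, so compute per token stays roughly constant as the expert count grows. A toy top-1 routing sketch, with scalar functions standing in for the expert FFNs (all names here are my own, purely illustrative):

```java
// Minimal sparse mixture-of-experts sketch: a gate scores each expert
// FFN and only the top-1 expert runs per input, which is the idea
// behind GLaM-style compute savings.
public class MoeSketch {
    // Pick the index of the highest gate score (top-1 routing).
    public static int route(double[] gateScores) {
        int best = 0;
        for (int i = 1; i < gateScores.length; i++)
            if (gateScores[i] > gateScores[best]) best = i;
        return best;
    }

    // Each "expert" here is just a scalar function standing in for a FFN.
    public static double forward(double x, double[] gateScores,
                                 java.util.function.DoubleUnaryOperator[] experts) {
        return experts[route(gateScores)].applyAsDouble(x); // only one expert runs
    }

    public static void main(String[] args) {
        java.util.function.DoubleUnaryOperator[] experts = {
            v -> v * 2.0,   // expert 0
            v -> v + 10.0,  // expert 1
        };
        double[] scores = {0.2, 0.8};          // gate prefers expert 1
        System.out.println(forward(3.0, scores, experts)); // 13.0
    }
}
```

In the real model the gate is itself a learned layer and routing is done per token, but the control flow is the same: most experts simply never execute.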
Reply
(10 Mar 2023, 05:59 )Lancer Wrote: transformer attention mechanism that isn't made in python?
Performance-wise? Isn't that what Torch (e.g. PyTorch, Lua Torch) is for? All computationally intensive parts are written in C.
Reply
(10 Mar 2023, 11:13 )Like Ra Wrote:
(10 Mar 2023, 05:59 )Lancer Wrote: transformer attention mechanism that isn't made in python?
Performance-wise? Isn't that what Torch (e.g. PyTorch, Lua Torch) is for? All computationally intensive parts are written in C.

I thought it was only greybeards that wrote C these days.
Everyone else seems to be script kiddies or glorified integration engineers (as that is what most of the jobs I see on offer are).

An awful lot of knowledge on how to write optimised code that runs on lower-spec (i.e. less power-hungry and cheaper) devices is being lost as we retire.
Reply
(10 Mar 2023, 23:44 )egregious Wrote: I thought it was only greybeards that wrote C these days.
https://github.com/pytorch/pytorch
C++ 45.8%
C 2.4%

https://github.com/torch/torch7
C 65.0%
C++ 0.5%

😊
Reply
(10 Mar 2023, 11:13 )Like Ra Wrote:
(10 Mar 2023, 05:59 )Lancer Wrote: transformer attention mechanism that isn't made in python?
Performance-wise? Isn't that what Torch (e.g. PyTorch, Lua Torch) is for? All computationally intensive parts are written in C.

I am a Java programmer and can read and call C libraries reasonably well, but I cannot for the life of me find one that doesn't have Python's sticky fingers laced in. Using Java to call Python to call C is wasteful, very slow, and requires strange project managers like Maven, which are the bane of my existence. In theory transformers aren't even supposed to be that hard, so I am now attempting to code one from scratch, but it is still a pain in the butt.
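For what it's worth, the attention mechanism itself really is small enough to write from scratch. A minimal scaled dot-product attention over plain Java arrays, with no Python anywhere, might look like this (a sketch, not a tuned implementation; no masking, no multi-head split):

```java
// From-scratch scaled dot-product attention (the core of a transformer
// layer) using plain Java arrays: no Python, no external libraries.
public class Attention {
    // Numerically stable softmax over one row of scores.
    public static double[] softmax(double[] x) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : x) max = Math.max(max, v);
        double sum = 0;
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) { out[i] = Math.exp(x[i] - max); sum += out[i]; }
        for (int i = 0; i < x.length; i++) out[i] /= sum;
        return out;
    }

    // q, k, v: [seqLen][dModel]; returns [seqLen][dModel].
    public static double[][] attend(double[][] q, double[][] k, double[][] v) {
        int n = q.length, d = q[0].length;
        double scale = 1.0 / Math.sqrt(d); // the "scaled" in scaled dot-product
        double[][] out = new double[n][d];
        for (int i = 0; i < n; i++) {
            double[] scores = new double[n];
            for (int j = 0; j < n; j++) {
                double dot = 0;
                for (int t = 0; t < d; t++) dot += q[i][t] * k[j][t];
                scores[j] = dot * scale;
            }
            double[] w = softmax(scores); // attention weights for query i
            for (int j = 0; j < n; j++)
                for (int t = 0; t < d; t++) out[i][t] += w[j] * v[j][t];
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] x = {{1, 0}, {0, 1}};
        System.out.println(java.util.Arrays.deepToString(attend(x, x, x)));
    }
}
```

The naive triple loop is O(n²·d), which is exactly why the production versions end up in C/CUDA, but as a reference implementation it is only a screenful of code.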
Reply
(10 Mar 2023, 23:44 )egregious Wrote:
(10 Mar 2023, 11:13 )Like Ra Wrote:
(10 Mar 2023, 05:59 )Lancer Wrote: transformer attention mechanism that isn't made in python?
Performance-wise? Isn't that what Torch (e.g. PyTorch, Lua Torch) is for? All computationally intensive parts are written in C.

I thought it was only greybeards that wrote C these days.
Everyone else seems to be script kiddies or glorified integration engineers (as that is what most of the jobs I see on offer are).

An awful lot of knowledge on how to write optimised code that runs on lower spec (ie less power hungry and cheaper) devices is being lost as we retire

Yeah, coding in C seems to be an exercise in masochism, and while I am not afraid to get my hands a bit dirty with coding, I don't really feel too constrained by hardware limits. Even your average Raspberry Pi 4 is now quad-core, which means one core runs the Pi and three are up for grabs using multithreaded Java, and with the innovations in virtual threads performance is less of a concern.
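As a sketch of that "three spare cores" idea, here is a plain thread-pool parallel sum; on Java 21+ the fixed pool below could be swapped for Executors.newVirtualThreadPerTaskExecutor() to use virtual threads instead (class and method names are my own):

```java
import java.util.concurrent.*;

// Spreading work across the spare cores with a thread pool.
// On Java 21+, Executors.newVirtualThreadPerTaskExecutor() could
// replace the fixed pool to use virtual threads.
public class ParallelSum {
    public static long sum(long[] data, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            int chunk = (data.length + threads - 1) / threads;
            java.util.List<Future<Long>> parts = new java.util.ArrayList<>();
            for (int t = 0; t < threads; t++) {
                final int lo = t * chunk;
                final int hi = Math.min(data.length, lo + chunk);
                parts.add(pool.submit(() -> {   // each task sums one slice
                    long s = 0;
                    for (int i = lo; i < hi; i++) s += data[i];
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : parts) total += f.get(); // join partial sums
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        long[] data = new long[1_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(sum(data, 3)); // 1..1000 -> 500500
    }
}
```

The caveat from later in the thread applies: this only helps if the work is actually written to be split up like this.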
Reply
(10 Mar 2023, 23:44 )egregious Wrote:
(10 Mar 2023, 11:13 )Like Ra Wrote:
(10 Mar 2023, 05:59 )Lancer Wrote: transformer attention mechanism that isn't made in python?
Performance-wise? Isn't that what Torch (e.g. PyTorch, Lua Torch) is for? All computationally intensive parts are written in C.

I thought it was only greybeards that wrote C these days.
Everyone else seems to be script kiddies or glorified integration engineers (as that is what most of the jobs I see on offer are).

An awful lot of knowledge on how to write optimised code that runs on lower spec (ie less power hungry and cheaper) devices is being lost as we retire

My life's work is an application, written in C, with some added libraries written in C++, but those were written by others.
This application is used on very powerful computers to get the most out of them. The alternative for the users would be to buy several times as many computers or wait several years until the computers are more powerful.
But I am also retired now and I am trying to pass the proper attitude on to a number of the people who rely on this application.

One of the problems is that the new generation expects to pick up ready-made modules without knowing what is in them, and that other people are so self-confident that they advertise their often not very high-quality products. I have seen a publication where the authors boasted that they could calculate some mathematical entity in 'only a few hours' on a decent computer, after which, in the next issue, somebody commented that he had calculated the same quantity in a leisurely 5 minutes by hand and noticed the following errors in their answer... Just imagine picking up such a program.
Reply
(11 Mar 2023, 22:08 )Lancer Wrote: Using java to call python to call C is wasteful
Totally agree! Just dump Java and everything is Python-simple and C-fast! 😬

(11 Mar 2023, 22:17 )Lancer Wrote: Even your average raspberry pi 4 is now quad core
Yep, which makes it very tempting to forget about efficiency.

Actually, the most efficient scientific libraries are still written in Fortran.

(11 Mar 2023, 22:17 )Lancer Wrote: are up for grabs using multithreaded Java
Provided the code is written with multithreading in mind.

Speaking about neural nets: Java is not needed here. All you need is C(++) and CUDA for the low-level maths, and Python for the interface. Who cares that Python is slow? It's not used for the calculations anyway. And if you need more speed from Python or Lua, there is Cython (yes, C again) and LuaJIT (yes, C again).

And ... JVM/JRE and native Java libraries are also written in C/C++ 😆
Reply




Contributors: cinon (1) , dhf7b8g (4) , egregious (1) , Highinheels66 (2) , HypnoMix (3) , Lancer (8) , Like Ra (197) , lugnuts (1) , madjack (12) , Marcus (25) , mcswitchy (2) , MstrChain (1) , shinybambi (2) , spawn (9) , Tinker D (14) , Zooy (2)