Tumblr is virtually dead
#21
It's a "standard" UNIX program/library, called "curl", not something written by me.
https://en.wikipedia.org/wiki/CURL
#22
before the "purge", I was using their old v1 API to download new images from a list of tumblrs...
it still seems to work, even without authentication...

So basically, this call gets a list of photo posts:
curl -s "${TUMBLR}/api/read?type=photo&num=${MAX_POSTS}&start=0"
(with $TUMBLR being the blog's URL)

and then grep for 'photo-url', a bit of sed magic, and wget...
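Something like this (a quick sketch of that pipeline, using the same variables as above; unlike the full script below, it doesn't pick the biggest photo size, it just grabs every photo-url it finds):

Code:
curl -s "${TUMBLR}/api/read?type=photo&num=${MAX_POSTS}&start=0" \
    | sed -e "s/<photo-url/\n<photo-url/g" \
    | grep "<photo-url " \
    | sed -e "s/.*>\(.*\)<\/photo-url.*/\1/" \
    | xargs -n 1 wget -nc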
If you want, I can post the full script...
#23
May not be a bad idea. At least if it's posted here, it can't get deleted unless Ra kills it, and we can reference it when we need it.

Does that make sense?
#24
aight, here it is...

You pass it the URL of the blog you want to download. It will create directories for the images and for the status files (these store the id of the latest downloaded post, so subsequent runs won't re-download images).
You'll probably need to adapt the WORKDIR variable...

Code:
#!/bin/bash
# Rip new photo posts from a Tumblr blog via the old (unauthenticated) v1 API.
# Pass the blog's URL as the only argument.

TUMBLR="$1"
DIRNAME=$(sed -e "s/[^[:alnum:]]/_/g" <<<"$TUMBLR")
WORKDIR=~/workspace/tumblr    # adapt this to your setup
IMAGE_DIR="images/$DIRNAME"
MAX_POSTS=100

cd "$WORKDIR" || exit 1
mkdir -p status "$IMAGE_DIR"

STATFILE="status/$DIRNAME"

# resume from the id of the latest downloaded post, if a status file exists
test -e "$STATFILE" && LAST_POST=$(cat "$STATFILE")
LAST_POST=${LAST_POST:-0}
NEWEST_POST=$LAST_POST

getPost(){
    MAX_SIZE=0
    PHOTO=""

    # iterate over all photo-urls and choose only the biggest pic
    for PHOTO_URL in $(sed -e "s/<photo-url/\n<photo-url/g" <<<"$1" | grep "<photo-url "); do
        PHOTO_TMP=$(echo "$PHOTO_URL" | sed -e "s/.*>\(.*\)<\/photo-url.*/\1/")
        SIZE=$(echo "$PHOTO_URL" | sed -e "s/.*max-width=\"\([[:digit:]]\+\)\".*/\1/")

        if [ "$SIZE" -gt "$MAX_SIZE" ]; then
            MAX_SIZE=$SIZE
            PHOTO=$PHOTO_TMP
        fi
    done

    # wait until no more than 5 wget processes are running
    while [ $(pgrep -c wget) -gt 5 ]; do
        sleep 1s
    done

    if [ "$PHOTO" ]; then
        echo "downloading $PHOTO"
        wget -P "$IMAGE_DIR" -nc -b "$PHOTO"
    fi
}


# fetch the post list; fall back to https if the http URL returns nothing
POSTS=$(curl -s "$TUMBLR/api/read?type=photo&num=$MAX_POSTS&start=0" | sed -e "s/<post/\n<post/g" | grep "<post ")
if [ -z "$POSTS" ]; then
    TUMBLR=$(sed -e "s/http:/https:/" <<<"$TUMBLR")
    POSTS=$(curl -s "$TUMBLR/api/read?type=photo&num=$MAX_POSTS&start=0" | sed -e "s/<post/\n<post/g" | grep "<post ")
fi

echo "checking $TUMBLR..."

# split on newlines only, so every <post ...> element is one loop item
IFS='
'
for POST in $POSTS; do
    NUMBER=$(sed -e "s/<post id=\"\([[:digit:]]\+\)\".*/\1/" <<<"$POST")

    # only handle posts newer than the last run; remember the newest id
    if [ "$NUMBER" -gt "$LAST_POST" ]; then
        getPost "$POST"
        if [ "$NUMBER" -gt "$NEWEST_POST" ]; then
            NEWEST_POST=$NUMBER
        fi
    fi
done

echo "$NEWEST_POST" > "$STATFILE"
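
To run it, something like this (assuming you save it as tumblr-rip.sh — the name is made up, call it whatever you like; the blog URL is just an example):

Code:
chmod +x tumblr-rip.sh
./tumblr-rip.sh http://someblog.tumblr.com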
#25
Good one! Thanks! The way to rip entire tumblr blogs! (Unless there's an API limitation.)

But for the automatic update I need to get access to the RSS. Or ... create a new wordpress plugin, which would require much more time 😁

Need to check the API...
#26
(01 Mar 2019, 16:28) occorics Wrote:
Code:
while [ $(pgrep -c wget) -gt 5 ]; do
    sleep 1s
done

I take it you run several scripts in parallel, right? Otherwise (for one blog only) I would add an '&' at the end of the 'wget' line:

Code:
wget -P "$IMAGE_DIR" -nc -b "$PHOTO"&
#27
Can't you take an existing Wordpress plugin and adapt it to suit the needs, rather than creating a whole new plugin?
#28
(02 Mar 2019, 02:13) Tinker D Wrote: Can't you take an existing Wordpress Plugin and adapt it
That's what I usually do - tinkering 😁

The problem is - there are no similar plugins around. The one I use for the RSS feeds was deleted by the author from everywhere and is, obviously, not supported. It's quite complex and buggy, but it works so far. The most recent version has some bugs fixed, but the caching was removed, which catastrophically degraded the performance, so I stick to one of the older versions, which actually needs to be rewritten.

The bottom line is - currently I tweak only what can be easily tweaked without digging into the details, and what will bring the maximum benefit. (BTW, I'm not even a programmer 😁 )
#29
There is no access to the RSS feed in either APIv1 or APIv2, but what might work is the following:

Program 1:
o- Grab the last x posts in the XML format
o- Convert the XML into the RSS XML
o- Store the results in memcache, indexed by the site name

Program 2:
o- Serve HTTP requests on localhost
o- Return memcache entry according to the site name in the query

Then configure the RSS feed grabber (the current WP plugin) to use the Program 2 URL.

Problem #1: figure out how to convert the Tumblr XML response to an RSS feed.
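
For Program 1, a rough, untested sketch of what it could look like in shell. It assumes a memcached running on localhost:11211 and a hypothetical XSLT stylesheet tumblr2rss.xsl that would do the XML-to-RSS conversion (i.e. problem #1); nc flags also vary between netcat flavours:

Code:
#!/bin/bash
# Program 1 (sketch): fetch recent posts, convert XML -> RSS, store in memcache.

TUMBLR="$1"
KEY=$(sed -e "s/[^[:alnum:]]/_/g" <<<"$TUMBLR")    # index by site name

XML=$(curl -s "$TUMBLR/api/read?type=photo&num=20&start=0")
RSS=$(xsltproc tumblr2rss.xsl - <<<"$XML")

# store via the memcached text protocol: set <key> <flags> <ttl> <bytes>
BYTES=$(printf '%s' "$RSS" | wc -c)
printf 'set %s 0 3600 %s\r\n%s\r\n' "$KEY" "$BYTES" "$RSS" | nc -q 1 localhost 11211

Program 2 would then only have to send "get <sitename>" to memcache and wrap the answer in an HTTP response.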
#30
(01 Mar 2019, 17:54) Like Ra Wrote: Good one! Thanks! The way to rip entire tumblr blogs! (Unless there's an API limitation.)

But for the automatic update I need to get access to the RSS. Or ... create a new wordpress plugin, which would require much more time 😁

Need to check the API...

It will only rip what's new. That's what the STATFILE is for: it stores the id of the latest downloaded post, and the script stops there on the next run. The first run is limited by MAX_POSTS.