Like Ra's Naughty Playground




Tumblr is virtually dead
#21
It's a "standard" UNIX program/library called "curl", not something written by me.
https://en.wikipedia.org/wiki/CURL
Reply
#22
before the "purge", I was using their old v1 API to download new images from a list of tumblrs...
it still seems to work, even without authentication...

so basically this call to get a list of photo-posts:
curl -s "${TUMBLR}/api/read?type=photo&num=${MAX_POSTS}&start=0"
(with $TUMBLR being the blog's URL)

and then grep for 'photo-url', a bit sed-magic and wget...
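For illustration, that grep/sed extraction can be sketched on a made-up response fragment (the XML below is an assumption for demo purposes; the real APIv1 response carries many more attributes):

```shell
#!/bin/bash
# A made-up APIv1 response fragment; the real XML carries more attributes
XML='<posts><post id="1"><photo-url max-width="1280">http://example.com/a.jpg</photo-url></post></posts>'

# split so each photo-url lands on its own line, keep those lines, strip the tags
URL=$(sed -e "s/<photo-url/\n<photo-url/g" <<<"$XML" \
    | grep "<photo-url " \
    | sed -e "s/.*>\(.*\)<\/photo-url.*/\1/")

echo "$URL"    # prints http://example.com/a.jpg
# wget -nc "$URL"   # the real script hands this URL to wget
```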
If you want, I can post the script...
Reply
#23
May not be a bad idea. At least if it's posted here, it can't get deleted unless Ra kills it, and we can reference it when we need it.

Does that make sense?
Reply
#24
aight, here it is...

You pass it the URL of the blog you want to download. It will create directories for the images and for status files (these store the id of the latest downloaded post, so subsequent runs won't re-download images).
You'll probably need to adapt the WORKDIR variable...

Code:
#!/bin/bash

TUMBLR="$1"
DIRNAME=$(sed -e "s/[^[:alnum:]]/_/g" <<<"$TUMBLR")
WORKDIR=~/workspace/tumblr
IMAGE_DIR="images/$DIRNAME"
MAX_POSTS=100

cd "$WORKDIR" || exit 1
mkdir -p status "$IMAGE_DIR"

STATFILE="status/$DIRNAME"

# id of the newest post downloaded on a previous run (0 = download everything)
test -e "$STATFILE" && LAST_POST=$(cat "$STATFILE")
LAST_POST=${LAST_POST:-0}
NEWEST_POST=$LAST_POST

getPost(){
    MAX_SIZE=0
    PHOTO=""

    # iterate over all photo-urls and choose only the biggest pic
    for PHOTO_URL in $(sed -e "s/<photo-url/\n<photo-url/g" <<<"$1" | grep "<photo-url "); do
        PHOTO_TMP=$(sed -e "s/.*>\(.*\)<\/photo-url.*/\1/" <<<"$PHOTO_URL")
        SIZE=$(sed -e "s/.*max-width=\"\([[:digit:]]\+\)\".*/\1/" <<<"$PHOTO_URL")

        if [ "$SIZE" -gt "$MAX_SIZE" ]; then
            MAX_SIZE=$SIZE
            PHOTO=$PHOTO_TMP
        fi
    done

    # wait until no more than 5 wget processes are running
    while [ $(pgrep -c wget) -gt 5 ]; do
        sleep 1s
    done

    if [ -n "$PHOTO" ]; then
        echo "downloading $PHOTO"
        wget -P "$IMAGE_DIR" -nc -b "$PHOTO"
    fi
}


POSTS=$(curl -s "$TUMBLR/api/read?type=photo&num=$MAX_POSTS&start=0" | sed -e "s/<post/\n<post/g" | grep "<post ")
# retry via https if the plain-http call returned nothing
if [ -z "$POSTS" ]; then
    TUMBLR=$(sed -e "s/http:/https:/" <<<"$TUMBLR")
    POSTS=$(curl -s "$TUMBLR/api/read?type=photo&num=$MAX_POSTS&start=0" | sed -e "s/<post/\n<post/g" | grep "<post ")
fi

echo "checking $TUMBLR..."

IFS=$'\n'    # split $POSTS on newlines only (the post XML contains spaces)
for POST in $POSTS; do
    NUMBER=$(sed -e "s/<post id=\"\([[:digit:]]\+\)\".*/\1/" <<<"$POST")

    if [ "$NUMBER" -gt "$LAST_POST" ]; then
        getPost "$POST"
        if [ "$NUMBER" -gt "$NEWEST_POST" ]; then
            NEWEST_POST=$NUMBER
        fi
    fi
done

echo "$NEWEST_POST" > "$STATFILE"

Reply
#25
Good one! Thanks! That's the way to rip entire tumblr blogs! (Unless there's an API limitation.)

But for the automatic update I need to get access to the RSS. Or ... create a new WordPress plugin, which would require much more time 😁

Need to check the API...
Reply
#26
(01 Mar 2019, 16:28 )occorics Wrote:
Code:
while [ $(pgrep -c wget) -gt 5 ]; do
    sleep 1s
done


I take it you run several scripts in parallel, right? Otherwise (for one blog only) I would add an '&' at the end of the 'wget' line:

Code:
wget -P "$IMAGE_DIR" -nc -b "$PHOTO"&
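An alternative to counting every wget on the system with pgrep is bash's own job table, which only sees this script's background jobs. A sketch, with `sleep` standing in for the actual wget call:

```shell
#!/bin/bash
MAX_JOBS=5
LAUNCHED=0

for i in $(seq 1 10); do
    # wait while this script already has MAX_JOBS background jobs running
    while [ "$(jobs -pr | wc -l)" -ge "$MAX_JOBS" ]; do
        sleep 0.1
    done
    sleep 0.3 &   # stand-in for: wget -P "$IMAGE_DIR" -nc "$PHOTO" &
    LAUNCHED=$((LAUNCHED + 1))
done

wait    # block until the remaining downloads finish
echo "launched $LAUNCHED jobs"
```

The advantage over `pgrep -c wget` is that a wget started by some other script on the same machine doesn't throttle this one.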

Reply
#27
Can’t you take an existing WordPress plugin and adapt it to suit the needs rather than creating a whole new plugin?
Reply
#28
(02 Mar 2019, 02:13 )Tinker D Wrote: Can’t you take an existing Wordpress Plugin and adapt it
That's what I usually do - tinkering 😁

The problem is that there are no similar plugins around. The one I use for the RSS feeds was deleted by the author from everywhere and is, obviously, no longer supported. It's quite complex and buggy, but it works so far. The most recent version has some bugs fixed, but the caching was removed, which catastrophically degraded the performance, so I use one of the older versions, which actually needs to be rewritten.

The bottom line is - currently I tweak only what can be easily tweaked without digging into the details, and what will bring the maximum benefit. (BTW, I'm not even a programmer 😁 )
Reply
#29
There is no access to the RSS feed in either APIv1 or APIv2, but what might work is the following:

Program 1:
o- Grab the last x posts in the XML format
o- Convert the XML into the RSS XML
o- Store the results in memcache, indexed by the site name

Program 2:
o- Serve HTTP requests on localhost
o- Return memcache entry according to the site name in the query

Then configure the RSS feed grabber (the current WP plugin) to use the Program 2 URL
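If memcache is the hand-off between the two programs, Program 1's storage step could use the plain memcached text protocol over a socket. A sketch; the key name, the 300 s TTL, the stub payload, and a memcached listening on localhost:11211 are all assumptions:

```shell
#!/bin/bash
# Hypothetical key and payload; a real run would store the generated feed
KEY="rss:exampleblog"
RSS='<rss version="2.0"><channel></channel></rss>'

# memcached text protocol: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
# (${#RSS} is a character count; it matches the byte count for ASCII payloads)
HEADER="set $KEY 0 300 ${#RSS}"
echo "$HEADER"
# printf '%s\r\n%s\r\n' "$HEADER" "$RSS" | nc -q 1 localhost 11211   # with a running memcached
```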

The problem #1 - figure out how to convert the Tumblr XML response to an RSS feed.
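A rough sed-based sketch of that conversion, in the same spirit as the download script. The one-post input fragment and the channel title are assumptions; the real APIv1 response carries many more fields, and a proper feed would also need pubDate etc.:

```shell
#!/bin/bash
# Hypothetical one-post APIv1 fragment
POSTS='<post id="1" url="http://blog.example.com/post/1"><photo-url max-width="1280">http://example.com/a.jpg</photo-url></post>'

# turn every <post> into a minimal RSS <item> with the photo inlined
ITEMS=$(sed -e "s/<post/\n<post/g" <<<"$POSTS" | grep "<post " | while read -r POST; do
    LINK=$(sed -e 's/.*url="\([^"]*\)".*/\1/' <<<"$POST")
    IMG=$(sed -e 's/.*<photo-url[^>]*>\([^<]*\)<.*/\1/' <<<"$POST")
    printf '<item><link>%s</link><description>&lt;img src="%s"/&gt;</description></item>\n' "$LINK" "$IMG"
done)

RSS="<?xml version=\"1.0\"?><rss version=\"2.0\"><channel><title>tumblr-mirror</title>$ITEMS</channel></rss>"
echo "$RSS"
```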
Reply
#30
(01 Mar 2019, 17:54 )Like Ra Wrote: Good one! Thanks! That's the way to rip entire tumblr blogs! (Unless there's an API limitation.)

But for the automatic update I need to get access to the RSS. Or ... create a new WordPress plugin, which would require much more time 😁

Need to check the API...

It will only rip what's new. That's what the STATFILE is for: it stores the id of the latest downloaded post, and the script stops there on the next run. The first run is limited by MAX_POSTS.
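The incremental logic can be simulated with hypothetical post ids (no network involved; the stored id 100 and the id list are made up):

```shell
#!/bin/bash
# Simulated state file from a previous run
STATFILE=$(mktemp)
echo 100 > "$STATFILE"

LAST_POST=$(cat "$STATFILE")
NEWEST_POST=$LAST_POST

# 98 is older than the stored id and gets skipped; the rest would be fetched
for NUMBER in 98 102 105 101; do
    if [ "$NUMBER" -gt "$LAST_POST" ]; then
        echo "would download post $NUMBER"
        if [ "$NUMBER" -gt "$NEWEST_POST" ]; then
            NEWEST_POST=$NUMBER
        fi
    fi
done

echo "$NEWEST_POST" > "$STATFILE"   # next run starts above 105
```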
Reply



