

Tumblr is virtually dead
#21
It's a "standard" UNIX program/library, called "curl", not something written by me.
https://en.wikipedia.org/wiki/CURL
Reply
#22
before the "purge", I was using their old v1 API to download new images from a list of tumblrs...
it still seems to work, even without authentication...

so basically this call to get a list of photo-posts:
curl -s "${TUMBLR}/api/read?type=photo&num=${MAX_POSTS}&start=0"
(with $TUMBLR being the blog's URL)

and then grep for 'photo-url', a bit of sed-magic and wget...
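Roughly like this (untested, from memory; this just takes the 1280px variant instead of picking the biggest one):

Code:
# untested sketch: pull the 1280px variants of the latest photo posts
curl -s "${TUMBLR}/api/read?type=photo&num=${MAX_POSTS}&start=0" \
        | sed -e "s/<photo-url/\n<photo-url/g" \
        | grep '<photo-url max-width="1280"' \
        | sed -e "s/.*>\(.*\)<\/photo-url.*/\1/" \
        | xargs -r -n1 wget -nc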
If you want, I can post the script...
Reply
#23
May not be a bad idea. At least if it’s posted here, it can’t get deleted unless Ra kills it, and we can reference it when we need it.

Does that make sense?
Reply
#24
aight, here it is...

You pass it the URL of the blog you want to download. It will create directories for the images and for the status files (they store the id of the latest downloaded post, so subsequent runs won't re-download images).
You'll probably need to adapt the WORKDIR variable...

Code:
#!/bin/bash
# Download new photo posts from a tumblr blog via the old v1 API.
# Usage: pass the blog URL as the only argument.

TUMBLR="$1"
# turn the blog URL into a name that is safe for directories and files
DIRNAME=$(sed -e "s/[^[:alnum:]]/_/g" <<<"$TUMBLR")
WORKDIR=~/workspace/tumblr
IMAGE_DIR="images/$DIRNAME"
MAX_POSTS=100

cd "$WORKDIR" || exit 1
mkdir -p status "$IMAGE_DIR"

STATFILE="status/$DIRNAME"

# id of the newest post fetched by a previous run (0 on the first run)
test -e "$STATFILE" && LAST_POST=$(cat "$STATFILE")
LAST_POST=${LAST_POST:-0}
NEWEST_POST=$LAST_POST

getPost(){
        MAX_SIZE=0
        PHOTO=""

        # iterate over all photo-urls and choose only the biggest pic
        for PHOTO_URL in $(sed -e "s/<photo-url/\n<photo-url/g" <<<"$1" | grep "<photo-url "); do
                PHOTO_TMP=$(sed -e "s/.*>\(.*\)<\/photo-url.*/\1/" <<<"$PHOTO_URL")
                SIZE=$(sed -e "s/.*max-width=\"\([[:digit:]]\+\)\".*/\1/" <<<"$PHOTO_URL")

                if [ "$SIZE" -gt "$MAX_SIZE" ]; then
                        MAX_SIZE=$SIZE
                        PHOTO=$PHOTO_TMP
                fi
        done

        # wait until no more than 5 wget processes are running
        while [ $(pgrep -c wget) -gt 5 ]; do
                sleep 1s
        done

        if [ "$PHOTO" ]; then
                echo "downloading $PHOTO"
                # -nc skips files that are already there, -b downloads in the background
                wget -P "$IMAGE_DIR" -nc -b "$PHOTO"
        fi
}


# fetch the latest posts and put every <post ...> element on its own line
POSTS=$(curl -s "$TUMBLR/api/read?type=photo&num=$MAX_POSTS&start=0" | sed -e "s/<post/\n<post/g" | grep "<post ")
if [ -z "$POSTS" ]; then
        # some blogs only answer over https, so retry
        TUMBLR=$(sed -e "s/http:/https:/" <<<"$TUMBLR")
        POSTS=$(curl -s "$TUMBLR/api/read?type=photo&num=$MAX_POSTS&start=0" | sed -e "s/<post/\n<post/g" | grep "<post ")
fi

echo "checking $TUMBLR..."

# split on newlines only, so every post stays one loop item
IFS='
'
for POST in $POSTS; do
        NUMBER=$(sed -e "s/<post\ id=\"\([[:digit:]]\+\)\".*/\1/" <<<"$POST")

        if [ "$NUMBER" -gt "$LAST_POST" ]; then
                getPost "$POST"
                if [ "$NUMBER" -gt "$NEWEST_POST" ]; then
                        NEWEST_POST=$NUMBER
                fi
        fi
done

# remember the newest post id for the next run
echo "$NEWEST_POST" > "$STATFILE"
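Usage, for example (the script name and blog URL are whatever you pick):

Code:
chmod +x tumblr-rip.sh
./tumblr-rip.sh "http://some-blog.tumblr.com"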
Reply
#25
Good one! Thanks! The way to rip entire tumblr blogs! (Unless there's an API limitation).

But for the automatic update I need to get access to the RSS. Or ... create a new wordpress plugin, which would require much more time 😁

Need to check the API...
Reply
#26
(01 Mar 2019, 16:28 )occorics Wrote:
Code:
while [ $(pgrep -c wget) -gt 5 ]; do
    sleep 1s
done

I take it you run several scripts in parallel, right? Otherwise (for one blog only) I would add an '&' at the end of the 'wget' line:

Code:
wget -P "$IMAGE_DIR" -nc -b "$PHOTO"&
Reply
#27
Can’t you take an existing Wordpress Plugin and adapt it to suit the needs rather than creating a whole new plugin?
Reply
#28
(02 Mar 2019, 02:13 )Tinker D Wrote: Can’t you take an existing Wordpress Plugin and adapt it
That's what I usually do - tinkering 😁

The problem is that there are no similar plugins around. The one I use for the RSS feeds was deleted by the author from everywhere and is, obviously, not supported. It's quite complex and buggy, but it works so far. The most recent version has some bugs fixed, but the caching was removed, which catastrophically degraded the performance, so I use one of the older versions, which actually needs to be rewritten.

The bottom line is that currently I tweak only what can be easily tweaked without digging into the details, and what will bring the maximum benefit. (BTW, I'm not even a programmer 😁)
Reply
#29
There is no access to the RSS feed in either APIv1 or APIv2, but the following might work:

Program 1:
o- Grab the last x posts in the XML format
o- Convert the XML into the RSS XML
o- Store the results in memcache, indexed by the site name

Program 2:
o- Serve HTTP requests on localhost
o- Return memcache entry according to the site name in the query

Then configure the RSS feed grabber (the current WP plugin) to use the Program 2 URL

Problem #1 - figure out how to convert the Tumblr XML response to an RSS feed. A first rough attempt at that part below.
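Something like this, in the same sed style as above (untested; the url attribute of the post element and the enclosure type are guesses from the v1 XML, and a plain file stands in for memcache for now):

Code:
#!/bin/bash
# Untested sketch of "Program 1": convert the Tumblr v1 XML answer into a
# minimal RSS 2.0 feed. Assumes <post ... url="..."> elements with <photo-url>
# children, as in the rip script above. A file in /tmp stands in for memcache.

TUMBLR="$1"
OUT="/tmp/rss_$(sed -e "s/[^[:alnum:]]/_/g" <<<"$TUMBLR").xml"

{
        echo '<?xml version="1.0" encoding="UTF-8"?>'
        echo '<rss version="2.0"><channel>'
        echo "<title>$TUMBLR</title><link>$TUMBLR</link><description>converted</description>"

        curl -s "$TUMBLR/api/read?type=photo&num=20&start=0" \
        | sed -e "s/<post/\n<post/g" | grep "<post " \
        | while read -r POST; do
                LINK=$(sed -e 's/.*url="\([^"]*\)".*/\1/' <<<"$POST")
                PHOTO=$(sed -e "s/<photo-url/\n<photo-url/g" <<<"$POST" \
                        | grep '<photo-url max-width="1280"' \
                        | sed -e "s/.*>\(.*\)<\/photo-url.*/\1/")
                # enclosure type is hardcoded as a guess; could be png/gif too
                echo "<item><title>$LINK</title><link>$LINK</link><enclosure url=\"$PHOTO\" type=\"image/jpeg\"/></item>"
        done

        echo '</channel></rss>'
} > "$OUT"

Program 2 can then be anything that serves that file on localhost; memcache can come later.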
Reply
#30
(01 Mar 2019, 17:54 )Like Ra Wrote: Good one! Thanks! The way to rip entire tumblr blogs! (Unless there's an API limitation).

But for the automatic update I need to get access to the RSS. Or ... create a new wordpress plugin, which would require much more time 😁

Need to check the API...

It will only rip everything new; that's what the STATFILE is for. It stores the id of the latest downloaded post, and the script stops there on the next run. The first run is limited by MAX_POSTS.
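For example, after a couple of runs (the id here is made up):

Code:
$ cat status/http___some_blog_tumblr_com
182761234567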
Reply



