Asian Bondage Website - Printable Version

+- Like Ra's Naughty Forum (https://www.likera.com/forum/mybb)
+-- Forum: Technical section (https://www.likera.com/forum/mybb/Forum-Technical-section)
+--- Forum: Various technical topics (https://www.likera.com/forum/mybb/Forum-Various-technical-topics)
+--- Thread: Asian Bondage Website (/Thread-Asian-Bondage-Website)
Asian Bondage Website - dhf7b8g - 23 Jul 2025

Continuing on from my posts in Bondage photos and videos. The download script is quite simple in theory, as it follows the same general outline for each URL.
The only requirement for this is a VPN provider that allows "unlimited" connections (although this almost certainly breaks a fair-use clause somewhere).

There are two (soon to be three) main parts to the script.

The first part is the "VPN Manager", which handles the VPN connections themselves. It spins up a Docker container with WireGuard and an HTTP proxy, picks the VPN server based on some time criteria (because of the 24-hour limit on downloads), and selects ports for the HTTP proxy in sequential order.

The second part is the "Site Interaction", the only part that talks directly to the download site. It basically exists only to generate the download link: it spins up an instance of Selenium (which uses the previously started HTTP proxy) and uses that to generate the download link.

The third part is the "Download Manager", which is not yet complete. It will handle any interactions with the real download manager (aria2c). As of right now it just sends the URL over, but ideally it should be able to spin containers up and down based on the state of the downloads.

The current state of the script is not great, as it was just an initial proof of concept. If I give it a list of 10 URLs it will start 10 downloads at the same time, assuming there are no issues with the VPN servers. However, this only works if my max concurrent is set to 10; if it's less than 10, I start running into bugs where subsequent downloads don't actually start after previous downloads have finished. Issues with the download-link generation aren't handled very gracefully either. Ideally this script will end up fully automated and will only require a URL text file and a few launch args, but I have some improvements I want to make first.
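The "VPN Manager" logic described above (pick a server that respects the 24-hour limit, hand out proxy ports sequentially) can be sketched in Python. This is only a minimal sketch of the bookkeeping side, not the actual script: the `VpnManager` class, the server names, and the `base_port` default are all hypothetical, and the Docker/WireGuard container start-up is left out.

```python
import itertools


class VpnManager:
    """Bookkeeping sketch: which VPN server each connection uses, plus
    sequential local ports for the per-container HTTP proxies."""

    def __init__(self, servers, base_port=8100):
        self.servers = servers        # hypothetical server names, e.g. ["nl-1", "nl-2"]
        self.used_since = {}          # server -> hour it was last handed out
        self._ports = itertools.count(base_port)

    def next_proxy_port(self):
        # Proxy ports are simply allocated in sequential order.
        return next(self._ports)

    def pick_server(self, now_hour):
        # Skip any server used within the last 24 hours, since the
        # download limit is tied to the exit IP.
        for server in self.servers:
            last = self.used_since.get(server)
            if last is None or now_hour - last >= 24:
                self.used_since[server] = now_hour
                return server
        raise RuntimeError("no eligible VPN server right now")
```

In the real script each picked server would then be passed to `docker run` (or the Docker SDK) to start the WireGuard-plus-proxy container listening on the allocated port.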
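The "Site Interaction" step (Selenium routed through the per-download HTTP proxy to generate the link) might look roughly like this. The CSS selector, function names, and page structure are placeholders, since the post doesn't describe the site's markup; only the `--proxy-server` Chrome flag and the Selenium calls are standard.

```python
def proxy_arg(port):
    # Chrome command-line flag that routes all browser traffic
    # through the local HTTP proxy started by the VPN Manager.
    return f"--proxy-server=http://127.0.0.1:{port}"


def generate_download_link(page_url, proxy_port, link_selector="a.download"):
    # Imported lazily so the helper above works without Selenium installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    opts = webdriver.ChromeOptions()
    opts.add_argument("--headless=new")
    opts.add_argument(proxy_arg(proxy_port))
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(page_url)
        # Placeholder selector -- the real site's download button differs.
        return driver.find_element(By.CSS_SELECTOR, link_selector).get_attribute("href")
    finally:
        driver.quit()
```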
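For the "Download Manager" hand-off, aria2c exposes a JSON-RPC interface (when run with `--enable-rpc`), and `aria2.addUri` accepts a per-download `all-proxy` option, so each download can be pinned to its matching VPN container's proxy. A sketch of sending the URL over, assuming aria2c is listening on the default RPC port; the function names and the split into build/send are my own:

```python
import json
import urllib.request


def build_add_uri(url, proxy_port, token=None):
    # aria2.addUri params: optional "token:<secret>", list of URIs, options dict.
    params = []
    if token:
        params.append(f"token:{token}")
    params.append([url])
    # Route just this transfer through the download's own HTTP proxy.
    params.append({"all-proxy": f"http://127.0.0.1:{proxy_port}"})
    return {"jsonrpc": "2.0", "id": "add", "method": "aria2.addUri", "params": params}


def send_to_aria2(payload, rpc_url="http://127.0.0.1:6800/jsonrpc"):
    req = urllib.request.Request(
        rpc_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # aria2 returns the GID of the newly added download.
        return json.load(resp)["result"]
```

Polling `aria2.tellStatus` with the returned GID would give the download state needed to spin containers up and down automatically.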
After I scraped the website I worked on another project that let me quickly mark videos as wanted/not wanted using the scraped data, so I might come back around to that and integrate it in the future. If there are any questions about the process that aren't answered here, let me know and I can give some more details. I'll use this thread to post progress updates on the script, and the release once it's in a "user compatible" state 😊