Like Ra's Naughty Forum

Full Version: Python script to download from erotic-hypnosis.com
It looks like the server at erotic-hypnosis.com is rate limiting downloading to 3 MB / minute, in 3 MB bursts.
If you have a very slow connection it's fine but if you have a fast connection your browser will download 3 MB and then timeout, marking the download as failed.

I made a Python script that logs into the website and downloads the file.
You'll need to edit the script and add your username, password and download URL.
For the download URL, log in to the site, go to the downloads page, right-click the download button and copy the link.

Code:
import time
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
from bs4 import BeautifulSoup

email = "username or email"
password = "your password"

login_url = "https://erotic-hypnosis.com/my-account"
download_url = "the url of the file"

output_file = "my.mp3"

# Create a session to maintain cookies and authentication
session = requests.Session()

# Set up retries with a backoff factor
# (urllib3 >= 1.26 renamed method_whitelist to allowed_methods)
retries = Retry(
    total=5,
    backoff_factor=1,
    status_forcelist=[429, 500, 502, 503, 504],
    allowed_methods=["HEAD", "GET", "OPTIONS", "POST"],
)
adapter = HTTPAdapter(max_retries=retries)
session.mount("http://", adapter)
session.mount("https://", adapter)

# requests has no session-wide timeout attribute, so pass it on each request
session_timeout = 120  # timeout in seconds

# Fetch the login page to retrieve the login nonce (WooCommerce's CSRF token)
print("Fetching CSRF token...")
response = session.get(login_url, timeout=session_timeout)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
nonce_field = soup.find("input", {"name": "woocommerce-login-nonce"})
if nonce_field is None:
    raise SystemExit("Could not find the login nonce on the login page")
csrf_token = nonce_field["value"]

print("Logging in...")
# Log in to the WooCommerce site
login_data = {
    "username": email,
    "password": password,
    "woocommerce-login-nonce": csrf_token,
    "login": "Log in"
}
response = session.post(login_url, data=login_data, timeout=session_timeout)

# WooCommerce answers 200 even when the credentials are wrong, so also check
# for the logout link that only appears on the account page once logged in
if response.status_code == 200 and "customer-logout" in response.text:
    print("Login successful")
else:
    print(f"Login failed (status code: {response.status_code})")
    raise SystemExit(1)

print("Downloading file...")
# Download the MP3 file
response = session.get(download_url, stream=True, timeout=session_timeout)
response.raise_for_status()

# Limit download speed to the server's 3 MB/minute, expressed in bytes/second
chunk_size = 8192
bandwidth_limit = 3 * 1024 * 1024 / 60
delay = chunk_size / bandwidth_limit  # ~0.16 s per 8 KB chunk

bytes_downloaded = 0
# Save the downloaded file
with open(output_file, "wb") as f:
    for chunk in response.iter_content(chunk_size=chunk_size):
        f.write(chunk)
        bytes_downloaded += len(chunk)
        print(f"Downloaded {bytes_downloaded} bytes")
        time.sleep(delay)

print(f"File downloaded successfully as {output_file}")
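One caveat about the fixed sleep: it is added on top of the time each chunk takes to arrive, so the real rate ends up somewhat below 3 MB/minute. If you want tighter pacing, a small helper (hypothetical, not part of the script above) can subtract the elapsed time from each chunk's time budget:

```python
import time

# Hypothetical helper: sleep only for what is left of each chunk's time
# budget, so network time counts toward the 3 MB/minute limit instead of
# being added on top of it.
def throttle_delay(chunk_len, elapsed, limit_bps=3 * 1024 * 1024 / 60):
    """Seconds to sleep after a chunk of chunk_len bytes that took
    `elapsed` seconds to arrive, to stay at or under limit_bps."""
    budget = chunk_len / limit_bps  # time this chunk is allowed to take
    return max(0.0, budget - elapsed)

# In the download loop, roughly:
#   start = time.monotonic()
#   ...receive and write the chunk...
#   time.sleep(throttle_delay(len(chunk), time.monotonic() - start))
```

A chunk that arrived instantly gets the full budget; one that took longer than its budget gets no extra sleep.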

If you don't have BeautifulSoup and requests installed:
Code:
pip install beautifulsoup4
pip install requests
Wouldn't wget do the trick?
(08 Apr 2023, 23:24 )Like Ra Wrote: [ -> ]Wouldn't wget do the trick?

Nope, I already tried.
Also tried feeding wget the cookies, no joy.
I tried limiting the bandwidth available to firefox, no joy.
I tried setting the connection timeout to a few minutes in firefox config, no joy.
I tried all kinds of tricks.
I was about to recompile firefox to never mark a download as failed but then had the idea of just doing a python script.
(09 Apr 2023, 21:04 )cinon Wrote: [ -> ]I was about to recompile firefox to never mark a download as failed
Oh! That's ... far!

(09 Apr 2023, 21:04 )cinon Wrote: [ -> ]Also tried feeding wget the cookies, no joy.
Usually, "--continue" works in such cases.
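For what it's worth, the Python equivalent of wget's "--continue" is an HTTP Range request. A minimal sketch (hypothetical and untested against this site; it only works if the server honors Range requests and answers 206 Partial Content):

```python
import os

def range_header(path):
    """Range header for resuming a partial download of `path`,
    plus the byte offset already on disk (empty header when
    starting fresh)."""
    existing = os.path.getsize(path) if os.path.exists(path) else 0
    headers = {"Range": f"bytes={existing}-"} if existing else {}
    return headers, existing

# Usage with the session from the script above (sketch only):
#   headers, existing = range_header(output_file)
#   response = session.get(download_url, headers=headers, stream=True)
#   mode = "ab" if response.status_code == 206 else "wb"
#   with open(output_file, mode) as f:
#       for chunk in response.iter_content(chunk_size=8192):
#           f.write(chunk)
```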
(09 Apr 2023, 22:17 )Like Ra Wrote: [ -> ]
(09 Apr 2023, 21:04 )cinon Wrote: [ -> ]I was about to recompile firefox to never mark a download as failed
Oh! That's ... far!

(09 Apr 2023, 21:04 )cinon Wrote: [ -> ]Also tried feeding wget the cookies, no joy.
Usually, "--continue" works in such cases.

Tried that as well.