Servarr¶
The secret speed sauce here is adding webhooks to things to construct "callbacks". This lets the manager -> Jellyfin -> Seerr chain run event-driven instead of waiting on polling timers.
VM Notes¶
Configure the VM to restart regularly. It's an easy way of making everything restart itself and clean up memory. It needs this.
PfSense VPN Routing¶
Go into pfSense and connect to a WireGuard VPN (in my case, I downloaded a Mullvad VPN config). Then go to Firewall -> NAT -> Outbound and set it to manual. Change the NAT address for the subnet(s) you want to run only over the VPN to the VPN interface's address instead of the WAN address.
Requests Speedhacking¶
Force Radarr/Sonarr Progress Refresh¶
Script that will forcibly refresh Radarr and Sonarr download progress
@echo off
REM Fire RefreshMonitoredDownloads at each Radarr/Sonarr instance every 2 seconds.
REM Fill in your server IP and per-instance API keys; adjust ports to match your setup.
:loop
start /b curl -X POST "http://RADARRSERVERIP:8989/api/v3/command/" -H "Content-Type: application/json" -H "X-Api-Key: API_KEY_HERE" -d "{\"id\": 17361, \"name\": \"RefreshMonitoredDownloads\"}"
start /b curl -X POST "http://RADARRSERVERIP:8990/api/v3/command/" -H "Content-Type: application/json" -H "X-Api-Key: API_KEY_HERE" -d "{\"id\": 17361, \"name\": \"RefreshMonitoredDownloads\"}"
start /b curl -X POST "http://RADARRSERVERIP:7878/api/v3/command/" -H "Content-Type: application/json" -H "X-Api-Key: API_KEY_HERE" -d "{\"id\": 17361, \"name\": \"RefreshMonitoredDownloads\"}"
timeout /t 2 >nul
goto loop
Webhook Callbacks¶
Jellyfin -> Seerr¶
- In Jellyfin: Plugins -> Webhook
- Generic
- Name: Jellyseerr
- URL: http://JELLYSEERR_IP_AND_PORT/api/v1/settings/jobs/jellyfin-recently-added-scan/run
- Status: Enable
- Item Added
- Request Header: Key accept, Value application/json
- Request Header: Key X-Api-Key, Value JELLYSEERR_API_KEY_HERE
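Before wiring up the plugin, you can sanity-check the endpoint by hand. A minimal sketch, assuming the same Jellyseerr URL and API key as above (the job-run endpoint should accept a POST and return a 2xx when the recently-added scan kicks off):

import requests

# Placeholders from the webhook config above -- fill in your real Jellyseerr host/port and API key
JELLYSEERR_URL = "http://JELLYSEERR_IP_AND_PORT/api/v1/settings/jobs/jellyfin-recently-added-scan/run"
API_KEY = "JELLYSEERR_API_KEY_HERE"

resp = requests.post(
    JELLYSEERR_URL,
    headers={"accept": "application/json", "X-Api-Key": API_KEY},
    timeout=10,
)
print(resp.status_code, resp.text[:200])  # a 2xx here means the job was triggered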
Radarr/Sonarr -> Jellyfin¶
- In Radarr/Sonarr: Connect -> Webhook
- On Grab
- On File Import
- On File Upgrade
- On Import Complete
- On Series/Movie Add
- URL: http://127.0.0.1:5000/jellyfin
- Method: POST
- Jellyfin: If Jellyfin gets another scan request before it's finished its current one, it cancels the current one and starts over. If a ton of media is being added, this can effectively block anything from getting scanned until all of the media is done downloading. My solution is a custom endpoint (attached here as JellyfinLibraryScanManager.py) that checks whether a scan is currently in progress. If it is, we queue up a scan event; if a scan event is already queued, we do nothing.
EDIT: the script below is out of date at the moment; it doesn't have the single-queue feature yet. Compile it with PyInstaller --onefile.
import http.server
import socketserver
import threading
import time
import json
import urllib.parse
import argparse
import sys
from http import HTTPStatus
import requests
_is_jellyfin_scan_running = False
_stop_event = threading.Event()
def build_jellyfin_api_url(address, api_target, api_key):
"""Constructs the full Jellyfin API URL with the API key."""
if not api_target.startswith('/'):
api_target = '/' + api_target
url = f"{address}{api_target}"
params = {'api_key': api_key}
return url + '?' + urllib.parse.urlencode(params)
def send_request(url, method='GET'):
headers = {'User-Agent': 'None', 'Content-Type': 'application/json'}
try:
if method.upper() == 'POST':
response = requests.post(url, headers=headers, data=json.dumps({}))
else:
response = requests.get(url, headers=headers)
if response.status_code == HTTPStatus.OK:
return response.text
else:
print(f"Request failed with status code {response.status_code} for URL: {url}")
return None
except requests.RequestException as e:
print(f"An error occurred during the request to {url}: {e}")
return None
def check_is_jellyfin_scan_running(config):
"""Continuously checks the status of the Jellyfin 'RefreshLibrary' scheduled task."""
jellyfin_url = build_jellyfin_api_url(config.address, "/scheduledtasks", config.apikey)
while not _stop_event.is_set():
time.sleep(config.scan_interval)
global _is_jellyfin_scan_running
tasks_response = send_request(jellyfin_url, method='GET')
if tasks_response:
try:
tasks_data = json.loads(tasks_response)
status = None
for task in tasks_data:
if task.get("Key") == "RefreshLibrary":
status = task.get("State")
break
print(f"Jellyfin scan status: {status}")
if status is None:
_is_jellyfin_scan_running = True
print("Warning: 'RefreshLibrary' task not found. Setting scan status to True.")
else:
status = status.lower()
_is_jellyfin_scan_running = "running" in status or "cancelling" in status
except json.JSONDecodeError:
print("Error: Could not parse Jellyfin response JSON.")
_is_jellyfin_scan_running = True
else:
_is_jellyfin_scan_running = True
print("Warning: Jellyfin API request failed. Setting scan status to True.")
class RequestHandler(http.server.SimpleHTTPRequestHandler):
def __init__(self, *args, **kwargs):
self.config = kwargs.pop('config')
super().__init__(*args, **kwargs)
def do_GET(self):
global _is_jellyfin_scan_running
if "jellyfin" in self.path.lower():
if _is_jellyfin_scan_running:
message = "Jellyfin is currently running a scan."
self.respond(message, HTTPStatus.OK)
else:
message = "Sending Jellyfin Library Update Scan Request"
self.respond(message, HTTPStatus.OK)
_is_jellyfin_scan_running = True
jellyfin_url = build_jellyfin_api_url(self.config.address, self.config.api_target, self.config.apikey)
threading.Thread(target=send_request, args=(jellyfin_url, 'POST')).start()
else:
message = "Endpoint not recognized."
self.respond(message, HTTPStatus.NOT_FOUND)
    def do_POST(self):
        global _is_jellyfin_scan_running
        if "jellyfin" in self.path.lower():
            if _is_jellyfin_scan_running:
                # A scan is already in progress; swallow the request instead of restarting the scan.
                message = "Jellyfin is currently running a scan."
                self.respond(message, HTTPStatus.OK)
            else:
                message = "Sending Jellyfin Library Update Scan Request"
                self.respond(message, HTTPStatus.OK)
                # Mark the scan as running right away to block concurrent requests,
                # then trigger the Jellyfin scan in the background.
                _is_jellyfin_scan_running = True
                jellyfin_url = build_jellyfin_api_url(self.config.address, self.config.api_target, self.config.apikey)
                threading.Thread(target=send_request, args=(jellyfin_url, 'POST')).start()
        else:
            message = "Endpoint not recognized."
            self.respond(message, HTTPStatus.NOT_FOUND)
def respond(self, message, status_code):
"""Helper to send the HTTP response."""
self.send_response(status_code)
self.send_header('Content-type', 'text/plain')
self.end_headers()
self.wfile.write(message.encode('utf-8'))
def run_server(config):
"""wrapper function to pass the configuration to the RequestHandler."""
def handler_factory(*args, **kwargs):
return RequestHandler(*args, config=config, **kwargs)
with socketserver.TCPServer((config.host, config.port), handler_factory) as httpd:
print(f"Serving HTTP on {config.host} port {config.port}...")
httpd.serve_forever()
def main():
parser = argparse.ArgumentParser(
description="A proxy server to trigger a Jellyfin library scan only when no scan is running."
)
# Required arguments (single dash)
parser.add_argument(
'-a', '--address',
required=True,
help="The full base URL of the Jellyfin server (e.g., http://192.168.4.4:8096)"
)
parser.add_argument(
'-k', '--apikey',
required=True,
help="The API key for accessing the Jellyfin server (e.g., test)"
)
# Optional arguments with defaults (single dash)
parser.add_argument(
'-t', '--api-target',
type=str,
default="/library/refresh",
help="The Jellyfin API endpoint to call to trigger the scan. (Default: /library/refresh)"
)
parser.add_argument(
'-i', '--scan-interval',
type=int,
default=3,
        help="The interval in seconds to check if a Jellyfin scan is running. (Default: 3 seconds)"
)
parser.add_argument(
'-H', '--host',
type=str,
default="127.0.0.1",
help="The host IP address for the local proxy server to listen on. (Default: 127.0.0.1)"
)
parser.add_argument(
'-p', '--port',
type=int,
default=5000,
help="The port for the local proxy server to listen on. (Default: 5000)"
)
config = parser.parse_args()
print("\n--- Configuration ---")
print(f"Jellyfin Address: {config.address}")
print(f"Jellyfin Scan Target: {config.api_target}")
print(f"Proxy Host: {config.host}:{config.port}")
    print(f"Scan Check Interval: {config.scan_interval} seconds")
print("---------------------\n")
scan_thread = threading.Thread(target=check_is_jellyfin_scan_running, args=(config,), daemon=True)
try:
scan_thread.start()
        # scan_interval is in seconds; give the status-check thread one interval to grab an initial state
        time.sleep(config.scan_interval)
run_server(config)
except Exception as e:
print(f"An error occurred: {e}")
finally:
_stop_event.set()
if 'scan_thread' in locals() and scan_thread.is_alive():
scan_thread.join()
sys.exit(0)
if __name__ == "__main__":
main()
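Once compiled, run it pointed at your Jellyfin server; the proxy host/port should match the webhook URL configured above. Example invocation (the executable name is just whatever PyInstaller produced; the address and key are placeholders):
JellyfinLibraryScanManager.exe -a http://192.168.4.4:8096 -k YOUR_JELLYFIN_API_KEY -H 127.0.0.1 -p 5000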
Seerr Build from Source¶
Follow the build-from-source guide here, but don't build it yet (make sure to use the guide for Windows): https://docs.jellyseerr.dev/getting-started/buildfromsource. Before building, modify every value in the code that is 15000 to 2000. These are hard-coded 15-second delays between download refreshes; it's nice if they're 2 seconds instead. I used Sublime Text to find-and-replace all entries in every file in the source directory.
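If you'd rather script the replacement than click through Sublime, here's a rough sketch of the same blunt find-and-replace in Python. It assumes the source checkout lives at the path used in the startup shortcut below, and it skips node_modules/.git and anything it can't decode as text:

import pathlib

# Assumed checkout location -- point this at your Jellyseerr source directory
SRC = pathlib.Path(r"C:\Users\Alchemy\Desktop\seerr")

for path in SRC.rglob("*"):
    # Skip directories, dependencies, and git metadata
    if not path.is_file() or "node_modules" in path.parts or ".git" in path.parts:
        continue
    try:
        text = path.read_text(encoding="utf-8")
    except (UnicodeDecodeError, OSError):
        continue  # binary or unreadable file; leave it alone
    if "15000" in text:
        path.write_text(text.replace("15000", "2000"), encoding="utf-8")
        print(f"patched {path}")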
Start it automatically with a shortcut placed in the shell:startup folder:
C:\Windows\System32\cmd.exe /k "cd /d C:\Users\Alchemy\Desktop\seerr && pnpm start"
In the settings file for Seerr, change the cron timer for the download sync job to every 2 seconds, like this: */2 * * * * *. It's at the bottom of the file. Also enable proxy support in the UI settings.
Radarr/Sonarr AutoManualImport¶
99.999% of the time, when something fails to auto-import after downloading because Radarr/Sonarr can't be absolutely sure it's the right file, it is the right file. To stop this from happening I wrote this script:
EDIT: at the time of writing I don't have Radarr set up with it yet; it works for Sonarr just fine though.
from argparse import ArgumentParser
from time import sleep
from requests import HTTPError, post, Timeout, get, ConnectionError, exceptions
SERVER_URL = ""
API_KEY = ""
def set_manual_import_file_data(available_files):
prepared_files = []
# Construct payload
for f in available_files:
# Pull seriesId and episodeIds from the GET response
episode_ids = [ep['id'] for ep in f.get('episodes', [])]
series_id = f.get('series', {}).get('id', 0) # fallback 0 if missing
if series_id == 0 or not episode_ids:
# Skip files that are impossible to "auto" import
continue
prepared_files.append({
"path": f["path"],
"folderName": f.get("folderName", ""),
"seriesId": series_id,
"episodeIds": episode_ids,
"quality": f["quality"],
"languages": f.get("languages", [{"id": 1, "name": "English"}]),
"releaseGroup": f.get("releaseGroup", ""),
"indexerFlags": f.get("indexerFlags", 0),
"releaseType": f.get("releaseType", "singleEpisode"),
"downloadId": f.get("downloadId", "")
})
return prepared_files
def get_all_queue_items():
# set pageSize to 10000 as lazy way to not miss anything (download queue api is paginated)
queue_resp = get(f"{SERVER_URL}/api/v3/queue", params={"apikey": API_KEY, "pageSize": 10000})
queue_resp.raise_for_status()
return queue_resp.json()
def get_manual_import_data(download_id):
get_url = f"{SERVER_URL}/api/v3/manualimport"
get_params = {
"downloadId": download_id,
"filterExistingFiles": "false"
}
    get_headers = {
        "Accept": "*/*",
        "Accept-Language": "en-US,en;q=0.7",
        "Connection": "keep-alive",
        # SERVER_URL already includes the scheme, so don't prepend http:// again
        "Referer": f"{SERVER_URL}/activity/queue",
        "Sec-GPC": "1",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/142.0.0.0 Safari/537.36",
        "X-Api-Key": API_KEY,
        "X-Requested-With": "XMLHttpRequest"
    }
resp = get(get_url, headers=get_headers, params=get_params, verify=False)
resp.raise_for_status()
return resp.json()
def send_manual_import_command_sonarr(prepared_files):
post_url = f"{SERVER_URL}/api/v3/command"
    post_headers = {
        "Accept": "application/json, text/javascript, */*; q=0.01",
        "Accept-Language": "en-US,en;q=0.7",
        "Connection": "keep-alive",
        "Content-Type": "application/json",
        # SERVER_URL already includes the scheme, so use it directly
        "Referer": f"{SERVER_URL}/activity/queue",
        "Origin": SERVER_URL,
        "Sec-GPC": "1",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/142.0.0.0 Safari/537.36",
        "X-Api-Key": API_KEY,
        "X-Requested-With": "XMLHttpRequest"
    }
payload = {
"name": "ManualImport",
"files": prepared_files,
"importMode": "auto"
}
return post(post_url, headers=post_headers, json=payload, verify=False)
def validate_servarr_connection():
try:
response = get(SERVER_URL, headers={"X-Api-Key": API_KEY}, timeout=5)
response.raise_for_status()
return response.status_code == 200
except HTTPError as e:
print(f"HTTP error from {SERVER_URL}: {e.response.status_code} - {e.response.reason}")
except ConnectionError:
print(f"Failed to connect to the server {SERVER_URL}")
except Timeout:
print(f"Request to server timed out {SERVER_URL}")
return False
def scan_radarr_server():
return
def scan_sonarr_server():
# lazy try-catch for everything so the tool doesn't crash and stop doing its job
try:
print(f"Fetching all queue items for {SERVER_URL}...")
queue = get_all_queue_items()
print(f"Found {len(queue.get('records', []))} queue items")
records = queue.get("records", [])
incomplete_downloads_count = 0
manually_imported_count = 0
for item in records:
if item.get("status") != "completed":
# print(f"Skipping non-completed item: {item.get('title')}")
incomplete_downloads_count += 1
continue
# Get available files for manual import from Sonarr
download_id = item.get("downloadId")
available_files = get_manual_import_data(download_id)
if not available_files:
print(f"No files available for manual import")
continue
print(f"Found {len(available_files)} file(s) available for import:")
for f in available_files:
print(f" - {f['path']}")
print(f" Episodes: {len(f.get('episodes', []))} episodes, IDs: {f.get('episodeIds', [])}")
print()
prepared_files = set_manual_import_file_data(available_files)
try:
post_resp = send_manual_import_command_sonarr(prepared_files)
print(f"POST status for downloadID {download_id}", post_resp.status_code)
if post_resp.status_code == 201:
manually_imported_count += 1
except exceptions.RequestException as e:
print(f"✗ Network error processing {download_id}: {e}")
except Exception as e:
print(f"✗ Unexpected error processing {download_id}: {e}")
print(f"Skipped {incomplete_downloads_count} items still downloading")
print(f"\"Manually\" imported {manually_imported_count} items")
except Exception as e:
print(e)
class HelpfulArgParser(ArgumentParser):
def error(self, message):
print(f"error: {message}\n")
self.print_help()
raise SystemExit(2)
def main():
# these globals should only be written to within here, so this is clean
global SERVER_URL
global API_KEY
parser = HelpfulArgParser(
description="A tool for automatically manually importing radarr/sonarr items stuck in the queue. currently only working with sonarr"
)
# Required flags with a single dash
parser.add_argument(
"-url",
required=True,
help="Server URL e.g. http://192.168.1.20:8989"
)
parser.add_argument(
"-server_type",
required=True,
help="\"sonarr\" or \"radarr\""
)
parser.add_argument(
"-api_key",
required=True,
help="API key for the server"
)
parser.add_argument(
"-refresh",
required=False,
default=10,
type=int,
help="Set refresh interval in seconds (default: 10)"
)
args = parser.parse_args()
SERVER_URL = args.url
API_KEY = args.api_key
if not validate_servarr_connection():
print("Failed to connect to the Radarr or Sonarr server. exiting.")
exit(1)
while True:
if args.server_type == "sonarr":
scan_sonarr_server()
elif args.server_type == "radarr":
scan_radarr_server()
sleep(args.refresh)
if __name__ == "__main__":
main()
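Example invocation (the filename is just whatever you saved the script as; the URL and key are placeholders):
python auto_manual_import.py -url http://192.168.1.20:8989 -server_type sonarr -api_key YOUR_SONARR_API_KEY -refresh 10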
Radarr/Sonarr¶
General housekeeping: Settings -> General -> turn off opening the browser on start.
Sonarr notes:
- Download Clients -> you should use a category like it suggests.
Recyclarr¶
This program syncs TRaSH Guides configurations to Sonarr/Radarr instances. It's how I build my quality profiles for sorting download results to pick the "best" option.
Here's how you apply a config file. All configs and scripts are stored in these doc resources under "recyclarr".
recyclarr sync --config sonarr.yml
To make a new config: recyclarr.exe config create --path filename.yml
Prowlarr¶
Indexers randomly time out, and Prowlarr disables them for 12 hours. This is unconfigurable, so here's my fix:
import time
from datetime import datetime, timezone
import requests
import argparse
def parse_iso(dt):
if not dt:
return None
try:
if dt.endswith("Z"):
dt = dt.replace("Z", "+00:00")
return datetime.fromisoformat(dt)
    except (ValueError, TypeError):
return None
def is_disabled(idx, now_utc):
# New condition: treat enable=False as disabled
if idx.get("enable") is False:
return True
status = idx.get("status", {})
disabled_till = parse_iso(status.get("disabledTill"))
return bool(disabled_till and disabled_till > now_utc)
def get_indexers(session, base):
r = session.get(f"{base}/api/v1/indexer", params={"includeStatus": "true"})
r.raise_for_status()
return r.json()
def fetch_indexer(session, base, idx_id):
r = session.get(f"{base}/api/v1/indexer/{idx_id}")
if r.status_code == 200:
return r.json()
return None
def reenable(session, base, idx):
idx_id = idx["id"]
detail = fetch_indexer(session, base, idx_id)
if not detail:
return False
detail["enable"] = True
detail.pop("status", None)
r = session.put(f"{base}/api/v1/indexer/{idx_id}", json=detail)
return r.status_code in (200, 202)
def test(session, base, idx):
idx_id = idx["id"]
for u in [
f"{base}/api/v1/indexer/{idx_id}/action/test",
f"{base}/api/v1/indexer/{idx_id}/test",
]:
r = session.post(u)
if r.status_code in (200, 201, 202, 204):
return True
return False
def main():
parser = argparse.ArgumentParser(description="Re-enable disabled Prowlarr indexers periodically")
parser.add_argument("-u", "--base-url", required=True, help="Prowlarr base URL")
parser.add_argument("-k", "--api-key", required=True, help="API key")
parser.add_argument("-i", "--interval", type=float, default=10, help="Check interval (seconds)")
args = parser.parse_args()
session = requests.Session()
session.headers.update({"X-Api-Key": args.api_key, "Accept": "application/json"})
base = args.base_url.rstrip("/")
print(f"Checking every {args.interval} seconds...")
try:
while True:
now = datetime.now(timezone.utc)
try:
indexers = get_indexers(session, base)
except Exception as e:
print(f"Fetch failed: {e}")
time.sleep(args.interval)
continue
disabled = [idx for idx in indexers if is_disabled(idx, now)]
if disabled:
print(f"[{datetime.now().isoformat(timespec='seconds')}] {len(disabled)} disabled indexer(s) found")
else:
print(f"[{datetime.now().isoformat(timespec='seconds')}] No disabled indexers")
for idx in disabled:
name = idx.get("name", idx["id"])
ok_enable = reenable(session, base, idx)
ok_test = test(session, base, idx)
print(f"Re-enable {'OK' if ok_enable else 'FAIL'}; Test {'OK' if ok_test else 'FAIL'} — {name}")
time.sleep(args.interval)
except KeyboardInterrupt:
print("\nStopping.")
if __name__ == "__main__":
main()
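Example invocation (the filename is whatever you saved it as; 9696 is the Prowlarr port used elsewhere in these notes, and the key is a placeholder):
python prowlarr_reenable.py -u http://localhost:9696 -k YOUR_PROWLARR_API_KEY -i 10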
Prowlarr Indexers¶
Usenet:
- DrunkenSlug
- prio 15
- tags: main
- NZBGeek
- prio 15
- tags: main
- NZBFinder
- prio 20
- tags: main
- NZBPlanet
- prio 25
- tags: main
Torrent:
- Nyaa
- Prefer magnet
- Sonarr compat
- no filter
- all categories
- sort by size
- desc
- tags: flaresolverr, anime
Indexer Proxies¶
FlareSolverr¶
Gets around captchas. Simple as dirt to set up. I use it for nyaa. Set it up if you need it.
- tags: flaresolverr
- host: http://localhost:8191
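To confirm FlareSolverr is actually answering before pointing Prowlarr at it, you can poke its /v1 endpoint directly. A minimal sketch, assuming the host above; the target URL is only an example:

import requests

# FlareSolverr endpoint from the proxy config above; the test URL is just an example
FLARESOLVERR = "http://localhost:8191/v1"
payload = {"cmd": "request.get", "url": "https://nyaa.si", "maxTimeout": 60000}

resp = requests.post(FLARESOLVERR, json=payload, timeout=90)
data = resp.json()
print(data.get("status"), data.get("message"))  # expect "ok" if the challenge was solved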
Apps¶
Radarr
- full sync
- tags: main
- prowlarr server: http://localhost:9696
- radarr server: http://localhost:7878
- api key: ...
Sonarr
- full sync
- tags: main
- prowlarr server: http://localhost:9696
- sonarr server: http://localhost:8989
- api key: ...
SonarrAnime
- full sync
- tags: anime
- prowlarr server: http://localhost:9696
- sonarr server: http://localhost:8989
- api key: ...
Sabnzbd¶
Providers¶
I use Newshosting and NewsDemon at the time of writing. I have an active Newsgroup Ninja account, but I'm not going to renew it since Newshosting is the backend for it.
Look up the latest usenet map for ideas on who to go with. I picked Newsdemon as a secondary since they're not on the Omicron backend.
The black friday sale window is the absolute best time to purchase a provider. The sales are very good.
Make sure you punch in the max # connections the provider gives you into sabnzbd so you're running at full speed.
Tweaks¶
Simple enough to set up that I won't cover it. Here are some tweaks under "Special" to make stuff faster:
- downloader_sleep_time 1
- receive_threads 16
- direct_unpack_threads 6 (or more)
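To double-check the values actually stuck, you can read them back over SABnzbd's API instead of digging through the UI. A sketch, assuming SABnzbd on localhost:8080 with a placeholder API key, and assuming these special options live under the misc section of the config:

import requests

# Assumed SABnzbd host/port and API key
SAB_API = "http://localhost:8080/api"
API_KEY = "SABNZBD_API_KEY_HERE"

resp = requests.get(
    SAB_API,
    params={"mode": "get_config", "apikey": API_KEY, "output": "json"},
    timeout=10,
)
misc = resp.json().get("config", {}).get("misc", {})
for key in ("downloader_sleep_time", "receive_threads", "direct_unpack_threads"):
    print(key, "=", misc.get(key, "<not found>"))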
qBittorrent¶
Download it and get it going. Here's how to set it up properly.
Connection:
- Uncheck use UPnP
- Global max conn 999
- Max conn per torr 999
BitTorrent
- Disable Local Peer Discovery
- Enable anonymous mode
WebUI
Enable it and set authentication to whatever you want.
How to not Fuck It Up¶
- Use a router-level VPN AND a desktop VPN.
- In qBittorrent, under Advanced, set Network interface to THE VPN CLIENT INTERFACE. You can leave Optional IP bind as "all". (A quick way to double-check the binding is sketched below.)
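You can verify the binding without clicking through the UI by dumping the current preferences over the WebUI API and eyeballing anything interface-related. A sketch, assuming the WebUI is on localhost:8080 with placeholder credentials:

import requests

# Assumed WebUI address and credentials
QBIT = "http://localhost:8080"
session = requests.Session()

# Log in (sets the SID cookie on the session), then pull the preferences blob
session.post(f"{QBIT}/api/v2/auth/login", data={"username": "admin", "password": "YOUR_PASSWORD"})
prefs = session.get(f"{QBIT}/api/v2/app/preferences").json()

# Print anything that looks interface-related so you can confirm it's the VPN adapter
for key, value in prefs.items():
    if "interface" in key.lower():
        print(key, "=", value)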
Advanced
- Memory priority: normal
- Refresh interval: 250
- Async I/O threads: 20
- File pool: 1000
- Outstanding memory: 1024MB
- Coalesce rw: Check