
Overview

The Download Manager is the core scraping interface of UNS. It provides comprehensive control over chapter-by-chapter scraping operations with real-time progress monitoring, automatic EPUB generation, and intelligent bypass capabilities for protected websites.
The Download Manager uses Electron’s built-in Chromium browser to scrape pages like a real user, bypassing most bot protections including Cloudflare challenges.

Starting a Download

Required Information

1. Enter First Chapter URL

Paste the URL of the first chapter of the novel you want to download. The scraper will automatically follow “next chapter” links from this starting point.
https://example.com/novel/chapter-1
Make sure you’re linking to Chapter 1, not the novel’s table of contents page. The scraper needs to start from actual chapter content.
2. Set Novel Title

Enter the title exactly as you want it to appear in your EPUB file.
The title is also used as the generated EPUB filename, so avoid characters that aren’t valid in filenames.

Optional Metadata

Author Name

Add the author’s name to embed proper metadata in your EPUB file.

Cover Image

Upload a custom cover image (JPG/PNG) or use the auto-fetched cover from Search results.

Workflow Example

// Frontend sends job to Electron main process
window.electronAPI.startScrape({
  job_id: crypto.randomUUID(),
  start_url: "https://site.com/chapter-1",
  novel_name: "My Novel",
  author: "Author Name",
  cover_data: base64ImageString,
  enable_cloudflare_bypass: false,
  sourceId: "detected-source"
});
  1. User fills in required fields (URL + Title)
  2. Click Start Download
  3. Electron opens hidden Chromium window
  4. Scraper extracts chapter content
  5. Data sent to Python backend for storage
  6. Process repeats until final chapter
  7. EPUB file generated automatically
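The loop in steps 3-6 can be sketched as plain control flow. This is a simplified sketch, not the app's actual code: `fetchChapter` and `saveChapter` are hypothetical stand-ins for the Electron page scrape and the POST to the Python backend.

```javascript
// Minimal sketch of the scrape loop (steps 3-6 above).
// fetchChapter stands in for the real Chromium page scrape and is
// expected to resolve to { title, content, nextUrl }.
async function scrapeAll(startUrl, fetchChapter, saveChapter) {
  let url = startUrl;
  const saved = [];
  while (url) {
    const page = await fetchChapter(url);
    await saveChapter(page);    // in the real app: send to the Python backend
    saved.push(page.title);
    url = page.nextUrl;         // null/undefined on the final chapter ends the loop
  }
  return saved;
}
```

In the real app the fetch happens inside a hidden Chromium window; this sketch only shows how following “next chapter” links terminates the job.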

Real-Time Monitoring

Statistics Panel

The right sidebar displays live download metrics:

Status

Active (green pulse) or Idle (gray)

Elapsed Time

Live timer showing MM:SS format

Chapters Scraped

Current count updates in real-time as each chapter is saved

Speed

Chapters per second average (e.g., 2.5/s)
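The speed readout follows directly from the two counters already in the panel; a sketch of the calculation (not the app's exact code):

```javascript
// Average chapters per second, formatted like the panel (e.g. "2.5/s").
function formatSpeed(chaptersScraped, elapsedSeconds) {
  if (elapsedSeconds <= 0) return '0.0/s';   // avoid division by zero at start
  return (chaptersScraped / elapsedSeconds).toFixed(1) + '/s';
}
```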

Progress Bar

Once the total chapter count is detected, a visual progress bar shows:
// Progress calculation
const progress = (chaptersScraped / totalChapters) * 100;
The total chapter count is extracted from the source website during the first few chapters, so the progress bar may appear after a brief delay.
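Because the total is unknown for the first few chapters, the calculation needs a guard; a hedged sketch of how the delayed bar might be handled:

```javascript
// Returns a 0-100 percentage, or null while the total is still unknown
// (the UI can hide the bar until this stops returning null).
function progressPercent(chaptersScraped, totalChapters) {
  if (!totalChapters) return null;
  return Math.min(100, (chaptersScraped / totalChapters) * 100);
}
```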

Console Output

The integrated console displays real-time logs from the scraping engine:
  • Info (gray) - General status updates
  • Success (green) - Successful operations
  • Error (red) - Failures or warnings
  • Progress (blue) - Chapter save confirmations
[14:32:15] 🚀 Initializing download engine...
[14:32:18] Saved Chapter 1: The Beginning
[14:32:21] Saved Chapter 2: First Steps
[14:32:24] Total chapters: 150
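Each entry pairs a timestamp with a severity level; one way to sketch the formatter (the color mapping lives in the UI and is omitted here):

```javascript
// Formats a console entry as "[HH:MM:SS] message", matching the log
// lines shown above. The level ('info' | 'success' | 'error' |
// 'progress') is carried alongside so the UI can pick a color.
function formatLogEntry(level, message, date) {
  const ts = date.toTimeString().slice(0, 8);  // "HH:MM:SS"
  return { level, text: `[${ts}] ${message}` };
}
```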

Advanced Options

Cloudflare Bypass Mode

Only enable Cloudflare bypass if you encounter anti-bot protection pages. This mode significantly slows down the scraping process.
When enabled, the scraper:
  1. Makes the browser window visible to the user
  2. Waits for manual interaction if a CAPTCHA appears
  3. Pauses between requests to mimic human behavior
  4. Uses extended timeouts for page loads
// Toggle Cloudflare bypass
setEnableCloudflareBypass(true);
If you see “Checking your browser…” or similar messages, enable this option and complete the challenge manually in the visible browser window.
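The four behaviors above amount to a different scraper configuration when the toggle is on. A sketch with assumed option names and values; the real internal settings may differ:

```javascript
// Maps the bypass toggle to scraper settings. Field names and the
// concrete delay/timeout values here are illustrative assumptions.
function scraperOptions(enableCloudflareBypass) {
  return enableCloudflareBypass
    ? { windowVisible: true,  requestDelayMs: 3000, pageTimeoutMs: 60000 }
    : { windowVisible: false, requestDelayMs: 0,    pageTimeoutMs: 15000 };
}
```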

Show/Hide Scraper Window

Click the External Link icon in the header to toggle the Chromium scraper window visibility:
  • Hidden (default): Runs in background for better performance
  • Visible: Watch the scraper navigate pages in real-time

Download Controls

Start Download

Begins scraping from the provided URL. The button stays disabled until both required fields are filled.

Abort Download

Immediately stops the active scraping job. Partial progress is saved to History.

Clear Console

Removes all log entries and resets statistics display.

Delete Job

Removes the current job from history and clears all associated data.

Completion & Auto-Cleanup

When a download finishes:
  1. Status changes to COMPLETED
  2. Final EPUB file is generated via /api/finalize-epub
  3. Success message displayed in console
  4. After 2.5 seconds, form fields auto-clear
  5. Job marked as completed in History
// Auto-cleanup on completion
if (data.status === 'COMPLETED') {
  setTimeout(() => {
    setUrl('');
    setName('');
    setAuthor('');
    clearCover();
    setCurrentJobId(null);
  }, 2500);
}
Completed EPUB files are automatically saved to your Library and can be read immediately without reopening the app.

Resume Interrupted Downloads

If a download is interrupted:
  1. Go to History page
  2. Find the incomplete job
  3. Click Resume
  4. Scraping continues from the last saved chapter
The backend tracks progress in real-time, so you never lose scraped chapters even if the app crashes.
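Resuming boils down to restarting the scrape loop from the last saved chapter's next link. A sketch, assuming (hypothetically) that each saved chapter records the “next chapter” URL it discovered:

```javascript
// Picks the URL to resume from. Assumes the backend stores, per saved
// chapter, the next-chapter URL found while scraping it.
function resumeUrl(job) {
  const last = job.savedChapters[job.savedChapters.length - 1];
  // Nothing saved yet: start over from the beginning.
  return last ? last.nextUrl : job.startUrl;
}
```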

Keyboard Tips

Enter

Press Enter in any input field to start the download (if the required fields are filled)

Tab

Navigate between input fields efficiently
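The Enter shortcut above can be wired up with a plain keydown check; a minimal sketch (function name assumed):

```javascript
// Decides whether an Enter keypress should start the download:
// only when both required fields are non-empty.
function shouldStartOnEnter(key, url, title) {
  return key === 'Enter' && url.trim() !== '' && title.trim() !== '';
}
```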

Troubleshooting

Download won’t start
  • Ensure both URL and Title fields are filled
  • Check console for error messages
  • Verify the URL is accessible in a regular browser

Site blocks the scraper
  • Try enabling Cloudflare bypass if the site has bot protection
  • Open the scraper window to see what’s happening
  • The site might require manual CAPTCHA solving

Chapters missing or incomplete
  • Check if the website changed its HTML structure
  • Try a different provider from the Marketplace
  • Some websites have inconsistent “next chapter” links
  • Check the History page to see which chapters were saved
  • You may need to manually download missing chapters separately

EPUB not generated
  • Ensure the Python backend is running (check console for backend errors)
  • Verify you have write permissions to the output directory
  • Try deleting the job and restarting the download

Next Steps

Troubleshooting

Common issues and solutions for downloads

Read EPUBs

Access your completed novels in the Library

Find Novels

Discover new content across multiple sources

Add Providers

Expand support for more websites