
lluriam19/SnapBot


What does the script do?

  • Automates screenshot captures: The script reads a file of URLs, iterates through each one, and takes a screenshot of every page using Chromium in headless mode.

  • Saves images with unique names: Screenshots are saved in an output directory and automatically named to avoid collisions by adding a timestamp to each file.

  • Closes processes if necessary: If the Chromium process keeps running after a certain time, the script terminates it to prevent orphan processes.

  • Optimization and control: The script manages captures efficiently, allows the window size to be configured, and suppresses unnecessary error messages (see the sketch after this list).
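
A minimal sketch of the core loop might look like the following. This is illustrative, not the actual script: the input filename, timeout value, and window size are assumptions.

    #!/usr/bin/env bash
    # Illustrative sketch of SnapBot's capture loop (assumed details, not the real script).
    URL_FILE="${1:-clean_urls.txt}"   # file with one URL per line (name assumed)
    OUT_DIR="screenshots"
    mkdir -p "$OUT_DIR"

    while IFS= read -r url; do
        [ -z "$url" ] && continue
        # Sanitize the URL and append a timestamp so filenames never collide
        name="$(printf '%s' "$url" | tr -c 'A-Za-z0-9' '_')$(date +%s%N).png"
        # timeout terminates Chromium if it keeps running, preventing orphan processes;
        # stderr is discarded to suppress Chromium's noisy warnings
        timeout 30 chromium --headless --disable-gpu \
            --window-size=1280,800 \
            --screenshot="$OUT_DIR/$name" "$url" 2>/dev/null
    done < "$URL_FILE"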

Why is it useful?

  • Bulk website capture: Ideal for web auditing projects, screenshot testing, or generating visual reports.

  • Automation of repetitive tasks: Automates the capture process quickly and without manual intervention, saving time when collecting visual information from websites.

  • Configurability: The window size and output directory are easily adjustable to meet user requirements.

Technologies used:

  • Bash scripting
  • Chromium (headless mode)
  • Linux/Unix

This script is a valuable tool for cybersecurity professionals, web application testers, and anyone who needs to automate webpage screenshot capturing.

How to Use SnapBot

I will use Gobuster to demonstrate how to use the tool.

My target will be a VM with the IP address 10.10.75.87, and the results discovered by Gobuster will be sent to a file called urls.txt:

[Screenshot: Gobuster scan being run against the target]
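
For reference, a typical invocation might look like this (the wordlist path is an assumption):

    gobuster dir -u http://10.10.75.87/joomla -w /usr/share/wordlists/dirb/common.txt -o urls.txt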

After completing the exploration, the file will have output similar to the following:

[Screenshot: contents of urls.txt]

To perform the captures, the tool requires well-formed URLs, like the following, stored in a file called clean_urls.txt:

[Screenshot: contents of clean_urls.txt]
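
For example, the entries might look like this (the specific paths are illustrative):

    http://10.10.75.87/joomla/administrator
    http://10.10.75.87/joomla/components
    http://10.10.75.87/joomla/images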

To achieve the structure above, we use the following command:

Syntax: awk '{print "<base-url>"$1}' urls.txt > clean_urls.txt
(or, when targeting a subdirectory: awk '{print "<base-url>/<dir>"$1}' urls.txt > clean_urls.txt)

awk '{print "http://10.10.75.87/joomla"$1}' urls.txt > clean_urls.txt
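
This works because each line of Gobuster's output begins with the discovered path, which awk reads as field $1; prepending the base URL yields a full URL. Assuming a typical output line such as:

    /administrator (Status: 301)

the command above turns it into:

    http://10.10.75.87/joomla/administrator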

Now it is just a matter of granting execution permissions and running the tool:

[Screenshot: making the script executable and running it]
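
Something like the following, assuming the script is named snapbot.sh (the filename is an assumption):

    chmod +x snapbot.sh
    ./snapbot.sh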

Once finished, a folder named screenshots will be created in the working directory, containing the screenshots:

[Screenshots: the screenshots directory and example captures]
