
Commit c5cc42e

Updated files.
1 parent 530495e commit c5cc42e

File tree

README.md
crawler.py

2 files changed: +3 -4 lines changed


README.md

Lines changed: 3 additions & 3 deletions
@@ -1,8 +1,8 @@
 # spidy Web Crawler
 Spidy (/spˈɪdi/) is the simple, easy to use command line web crawler.<br>
-Given a list of web links, it uses the Python [`lxml`](http://lxml.de/index.html) and [`requests`](http://docs.python-requests.org) libraries to query the webpages.<br>
-Spidy then extracts all links from the DOM of the page and adds them to its list.<br>
-It does this to infinity!
+Given a list of web links, it uses the Python [`requests`](http://docs.python-requests.org) library to query the webpages.<br>
+Spidy then uses [`lxml`](http://lxml.de/index.html) to extract all links from the page and adds them to its list.<br>
+Pretty simple!
 
 Developed by [rivermont](https://github.com/rivermont) (/rɪvɜːrmɒnt/) and [FalconWarriorr](https://github.com/Casillas-) (/fælcʌnraɪjɔːr/).<br>
 Looking for technical documentation? Check out [docs.md](https://github.com/rivermont/spidy/blob/master/docs.md)
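
The reworded README splits the crawl step across the two libraries: `requests` fetches each page and `lxml` pulls the links out of it. A minimal sketch of that step, assuming a hypothetical `get_links` helper and an illustrative seed URL (not spidy's actual code):

```python
# Sketch of the crawl step described in the README: fetch a page with
# `requests`, parse it with `lxml`, and return the links found on it.
# `get_links` and the seed URL below are illustrative, not spidy's code.
import requests
from lxml import html
from urllib.parse import urljoin


def get_links(url):
    """Return the absolute URLs of all links found on `url`."""
    response = requests.get(url, timeout=10)
    tree = html.fromstring(response.content)
    # Resolve every href on the page against the page's own URL.
    return [urljoin(url, href) for href in tree.xpath('//a/@href')]


# Each call yields new links that a crawler would add to its to-do list.
todo = get_links('http://example.com/')
```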

crawler.py

Lines changed: 0 additions & 1 deletion
@@ -48,7 +48,6 @@ def write_log(message):
 # Import required libraries
 import requests
 import shutil
-import sys
 from lxml import html, etree
 from os import makedirs
 from winsound import Beep
