README.md: 3 additions & 3 deletions
@@ -1,8 +1,8 @@
 # spidy Web Crawler
 Spidy (/spˈɪdi/) is the simple, easy to use command line web crawler.<br>
-Given a list of web links, it uses the Python [`lxml`](http://lxml.de/index.html) and [`requests`](http://docs.python-requests.org) libraries to query the webpages.<br>
-Spidy then extracts all links from the DOM of the page and adds them to its list.<br>
-It does this to infinity!
+Given a list of web links, it uses the Python [`requests`](http://docs.python-requests.org) library to query the webpages.<br>
+Spidy then uses [`lxml`](http://lxml.de/index.html) to extract all links from the page and adds them to its list.<br>
+Pretty simple!
 
 Developed by [rivermont](https://github.com/rivermont) (/rɪvɜːrmɒnt/) and [FalconWarriorr](https://github.com/Casillas-) (/fælcʌnraɪjɔːr/).<br>
 Looking for technical documentation? Check out [docs.md](https://github.com/rivermont/spidy/blob/master/docs.md)
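
The revised wording summarises spidy's crawl loop: `requests` fetches each page and `lxml` extracts the links, which are appended to the to-do list. A minimal sketch of that loop is shown below; the names (`todo`, `crawl_page`) are illustrative, not spidy's actual implementation.

```python
# Sketch of the fetch-and-extract loop described in the README:
# requests downloads each page, lxml pulls out its links, and new
# links are added to the list of pages still to crawl.
import requests
from lxml import html

todo = ["http://example.com/"]   # starting list of web links (illustrative)
done = set()

def crawl_page(url):
    """Fetch one page and return the absolute URLs of all links on it."""
    response = requests.get(url, timeout=10)
    tree = html.fromstring(response.content)
    tree.make_links_absolute(url)                     # resolve relative hrefs
    return [link for _, _, link, _ in tree.iterlinks()]

while todo:
    url = todo.pop(0)
    if url in done:
        continue
    done.add(url)
    for link in crawl_page(url):
        if link not in done:
            todo.append(link)
```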