Commit fb7cf40

Merge branch 'master' into patch-2

2 parents 1b5bf99 + 166c027

File tree

5 files changed: +1288 −0 lines changed
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
# Scrapeless

[Scrapeless](https://scrapeless.com) offers flexible and feature-rich data acquisition services with extensive parameter customization and multi-format export support.

## Installation and Setup

```bash
pip install langchain-scrapeless
```

You'll need to set up your Scrapeless API key:

```python
import os

os.environ["SCRAPELESS_API_KEY"] = "your-api-key"
```

## Tools

The Scrapeless integration provides several tools; a minimal usage sketch follows the list:

- [ScrapelessDeepSerpGoogleSearchTool](/docs/integrations/tools/scrapeless_scraping_api) - Enables comprehensive extraction of Google SERP data across all result types.
- [ScrapelessDeepSerpGoogleTrendsTool](/docs/integrations/tools/scrapeless_scraping_api) - Retrieves keyword trend data from Google, including popularity over time, regional interest, and related searches.
- [ScrapelessUniversalScrapingTool](/docs/integrations/tools/scrapeless_universal_scraping) - Accesses and extracts data from JavaScript-rendered websites that typically block bots.
- [ScrapelessCrawlerCrawlTool](/docs/integrations/tools/scrapeless_crawl) - Crawls a website and its linked pages to extract comprehensive data.
- [ScrapelessCrawlerScrapeTool](/docs/integrations/tools/scrapeless_crawl) - Extracts information from a single webpage.
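
Each tool follows the same import-and-invoke pattern. Here is a minimal sketch using the universal scraping tool, mirroring the notebook below; the other tools are instantiated the same way, with tool-specific parameters documented on their linked pages:

```python
import os

from langchain_scrapeless import ScrapelessUniversalScrapingTool

# The API key must be set before the tool is created (see above).
os.environ["SCRAPELESS_API_KEY"] = "your-api-key"

tool = ScrapelessUniversalScrapingTool()

# Fetch a JavaScript-rendered page; raw HTML is returned by default.
result = tool.invoke("https://example.com")
print(result)
```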

docs/docs/integrations/tools/scrapeless_crawl.ipynb

Lines changed: 446 additions & 0 deletions
Large diffs are not rendered by default.

docs/docs/integrations/tools/scrapeless_scraping_api.ipynb

Lines changed: 474 additions & 0 deletions
Large diffs are not rendered by default.
Lines changed: 339 additions & 0 deletions
@@ -0,0 +1,339 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a6f91f20",
"metadata": {},
"source": [
"# Scrapeless\n",
"\n",
"**Scrapeless** offers flexible and feature-rich data acquisition services with extensive parameter customization and multi-format export support. These capabilities empower LangChain to integrate and leverage external data more effectively. The core functional modules include:\n",
"\n",
"**DeepSerp**\n",
"- **Google Search**: Enables comprehensive extraction of Google SERP data across all result types.\n",
" - Supports selection of localized Google domains (e.g., `google.com`, `google.ad`) to retrieve region-specific search results.\n",
" - Pagination supported for retrieving results beyond the first page.\n",
" - Supports a search result filtering toggle to control whether to exclude duplicate or similar content.\n",
"- **Google Trends**: Retrieves keyword trend data from Google, including popularity over time, regional interest, and related searches.\n",
" - Supports multi-keyword comparison.\n",
" - Supports multiple data types: `interest_over_time`, `interest_by_region`, `related_queries`, and `related_topics`.\n",
" - Allows filtering by specific Google properties (Web, YouTube, News, Shopping) for source-specific trend analysis.\n",
"\n",
"**Universal Scraping**\n",
"- Designed for modern, JavaScript-heavy websites, allowing dynamic content extraction.\n",
" - Global premium proxy support for bypassing geo-restrictions and improving reliability.\n",
"\n",
"**Crawler**\n",
"- **Crawl**: Recursively crawl a website and its linked pages to extract site-wide content.\n",
" - Supports configurable crawl depth and scoped URL targeting.\n",
"- **Scrape**: Extract content from a single webpage with high precision.\n",
" - Supports \"main content only\" extraction to exclude ads, footers, and other non-essential elements.\n",
" - Allows batch scraping of multiple standalone URLs.\n",
"\n",
"## Overview\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Serializable | JS support | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [ScrapelessUniversalScrapingTool](https://pypi.org/project/langchain-scrapeless/) | [langchain-scrapeless](https://pypi.org/project/langchain-scrapeless/) | ✅ | ❌ | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-scrapeless?style=flat-square&label=%20) |\n",
"\n",
"### Tool features\n",
"\n",
"|Native async|Returns artifact|Return data|\n",
"|:-:|:-:|:-:|\n",
"|✅|✅|html, markdown, links, metadata, structured content|\n",
"\n",
"\n",
"## Setup\n",
"\n",
"The integration lives in the `langchain-scrapeless` package."
]
},
{
"cell_type": "raw",
"id": "ca676665",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"!pip install langchain-scrapeless"
]
},
{
"cell_type": "markdown",
"id": "b15e9266",
"metadata": {},
"source": [
"### Credentials\n",
"\n",
"You'll need a Scrapeless API key to use this tool. You can set it as an environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e0b178a2-8816-40ca-b57c-ccdd86dde9c9",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"SCRAPELESS_API_KEY\"] = \"your-api-key\""
]
},
{
"cell_type": "markdown",
"id": "1c97218f-f366-479d-8bf7-fe9f2f6df73f",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Here we show how to instantiate the Scrapeless Universal Scraping Tool. This tool allows you to scrape any website using a headless browser with JavaScript rendering capabilities, customizable output types, and geo-specific proxy support.\n",
"\n",
"The tool accepts the following parameters:\n",
"- `url` (required, str): The URL of the website to scrape.\n",
"- `headless` (optional, bool): Whether to use a headless browser. Default is True.\n",
"- `js_render` (optional, bool): Whether to enable JavaScript rendering. Default is True.\n",
"- `js_wait_until` (optional, str): Defines when to consider the JavaScript-rendered page ready. Default is `'domcontentloaded'`. Options include:\n",
" - `load`: Wait until the page is fully loaded.\n",
" - `domcontentloaded`: Wait until the DOM is fully loaded.\n",
" - `networkidle0`: Wait until there are no more than 0 network connections for at least 500 ms.\n",
" - `networkidle2`: Wait until there are no more than 2 network connections for at least 500 ms.\n",
"- `outputs` (optional, str): The specific type of data to extract from the page. Options include:\n",
" - `phone_numbers`\n",
" - `headings`\n",
" - `images`\n",
" - `audios`\n",
" - `videos`\n",
" - `links`\n",
" - `menus`\n",
" - `hashtags`\n",
" - `emails`\n",
" - `metadata`\n",
" - `tables`\n",
" - `favicon`\n",
"- `response_type` (optional, str): Defines the format of the response. Default is `'html'`. Options include:\n",
" - `html`: Return the raw HTML of the page.\n",
" - `plaintext`: Return the plain text content.\n",
" - `markdown`: Return a Markdown version of the page.\n",
" - `png`: Return a PNG screenshot.\n",
" - `jpeg`: Return a JPEG screenshot.\n",
"- `response_image_full_page` (optional, bool): Whether to capture and return a full-page image when using screenshot output (png or jpeg). Default is False.\n",
"- `selector` (optional, str): A specific CSS selector to scope scraping within a part of the page. Default is `None`.\n",
"- `proxy_country` (optional, str): Two-letter country code for geo-specific proxy access (e.g., `'us'`, `'gb'`, `'de'`, `'jp'`). Default is `'ANY'`."
]
},
{
"cell_type": "markdown",
"id": "74147a1a",
"metadata": {},
"source": [
"## Invocation\n",
"\n",
"### Basic Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "65310a8b-eb0c-4d9e-a618-4f4abe2414fc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<!DOCTYPE html><html><head>\n",
" <title>Example Domain</title>\n",
"\n",
" <meta charset=\"utf-8\">\n",
" <meta http-equiv=\"Content-type\" content=\"text/html; charset=utf-8\">\n",
" <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n",
" <style type=\"text/css\">\n",
" body {\n",
" background-color: #f0f0f2;\n",
" margin: 0;\n",
" padding: 0;\n",
" font-family: -apple-system, system-ui, BlinkMacSystemFont, \"Segoe UI\", \"Open Sans\", \"Helvetica Neue\", Helvetica, Arial, sans-serif;\n",
" \n",
" }\n",
" div {\n",
" width: 600px;\n",
" margin: 5em auto;\n",
" padding: 2em;\n",
" background-color: #fdfdff;\n",
" border-radius: 0.5em;\n",
" box-shadow: 2px 3px 7px 2px rgba(0,0,0,0.02);\n",
" }\n",
" a:link, a:visited {\n",
" color: #38488f;\n",
" text-decoration: none;\n",
" }\n",
" @media (max-width: 700px) {\n",
" div {\n",
" margin: 0 auto;\n",
" width: auto;\n",
" }\n",
" }\n",
" </style> \n",
"</head>\n",
"\n",
"<body>\n",
"<div>\n",
" <h1>Example Domain</h1>\n",
" <p>This domain is for use in illustrative examples in documents. You may use this\n",
" domain in literature without prior coordination or asking for permission.</p>\n",
" <p><a href=\"https://www.iana.org/domains/example\">More information...</a></p>\n",
"</div>\n",
"\n",
"\n",
"</body></html>\n"
]
}
],
"source": [
"from langchain_scrapeless import ScrapelessUniversalScrapingTool\n",
"\n",
"tool = ScrapelessUniversalScrapingTool()\n",
"\n",
"# Basic usage\n",
"result = tool.invoke(\"https://example.com\")\n",
"print(result)"
]
},
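{
"cell_type": "markdown",
"id": "async-usage-note",
"metadata": {},
"source": [
"### Async Usage\n",
"\n",
"The feature table above lists native async support. A minimal sketch of the asynchronous variant, using the standard LangChain `ainvoke` method on the same tool instance (assumes Jupyter's running event loop for top-level `await`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "async-usage-code",
"metadata": {},
"outputs": [],
"source": [
"# Asynchronous counterpart of the basic call above.\n",
"result = await tool.ainvoke(\"https://example.com\")\n",
"print(result)"
]
},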
{
"cell_type": "markdown",
"id": "d6e73897",
"metadata": {},
"source": [
"### Advanced Usage with Parameters"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f90e33a7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"# Well hello there.\n",
"\n",
"Welcome to exmaple.com.\n",
"Chances are you got here by mistake (example.com, anyone?)\n"
]
}
],
"source": [
"from langchain_scrapeless import ScrapelessUniversalScrapingTool\n",
"\n",
"tool = ScrapelessUniversalScrapingTool()\n",
"\n",
"result = tool.invoke({\"url\": \"https://exmaple.com\", \"response_type\": \"markdown\"})\n",
"print(result)"
]
},
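{
"cell_type": "markdown",
"id": "structured-outputs-note",
"metadata": {},
"source": [
"### Structured Outputs\n",
"\n",
"The `outputs` parameter narrows extraction to a single data type. A minimal sketch using `outputs=\"headings\"`, the same argument the agent example below passes; as the agent trace shows, the result is a JSON object keyed by the requested type:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "structured-outputs-code",
"metadata": {},
"outputs": [],
"source": [
"from langchain_scrapeless import ScrapelessUniversalScrapingTool\n",
"\n",
"tool = ScrapelessUniversalScrapingTool()\n",
"\n",
"# Extract only the page's headings rather than the full document.\n",
"result = tool.invoke({\"url\": \"https://www.scrapeless.com/en\", \"outputs\": \"headings\"})\n",
"print(result)"
]
},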
{
"cell_type": "markdown",
"id": "659f9fbd-6fcf-445f-aa8c-72d8e60154bd",
"metadata": {},
"source": [
"### Use within an agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "af3123ad-7a02-40e5-b58e-7d56e23e5830",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Use the scrapeless scraping tool to fetch https://www.scrapeless.com/en and extract the h1 tag.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" scrapeless_universal_scraping (call_jBrvMVL2ixhvf6gklhi7Gqtb)\n",
" Call ID: call_jBrvMVL2ixhvf6gklhi7Gqtb\n",
" Args:\n",
" url: https://www.scrapeless.com/en\n",
" outputs: headings\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: scrapeless_universal_scraping\n",
"\n",
"{\"headings\":[\"Effortless Web Scraping Toolkitfor Business and Developers\",\"4.8\",\"4.5\",\"8.5\",\"A Flexible Toolkit for Accessing Public Web Data\",\"Deep SerpApi\",\"Scraping Browser\",\"Universal Scraping API\",\"Customized Services\",\"From Simple Data Scraping to Complex Anti-Bot Challenges, Scrapeless Has You Covered.\",\"Fully Compatible with Key Programming Languages and Tools\",\"Enterprise-level Data Scraping Solution\",\"Customized Data Scraping Solutions\",\"High Concurrency and High-Performance Scraping\",\"Data Cleaning and Transformation\",\"Real-Time Data Push and API Integration\",\"Data Security and Privacy Protection\",\"Enterprise-level SLA\",\"Why Scrapeless: Simplify Your Data Flow Effortlessly.\",\"Articles\",\"Organized Fresh Data\",\"Prices\",\"No need to hassle with browser maintenance\",\"Reviews\",\"Only pay for successful requests\",\"Products\",\"Fully scalable\",\"Unleash Your Competitive Edgein Data within the Industry\",\"Regulate Compliance for All Users\",\"Web Scraping Blog\",\"Scrapeless MCP Server Is Officially Live! Build Your Ultimate AI-Web Connector\",\"Product Updates | New Profile Feature\",\"How to Track Your Ranking on ChatGPT?\",\"For Scraping\",\"For Data\",\"For AI\",\"Top Scraper API\",\"Learning Center\",\"Legal\"]}\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"The h1 tag extracted from the website https://www.scrapeless.com/en is \"Effortless Web Scraping Toolkit for Business and Developers\".\n"
]
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langchain_scrapeless import ScrapelessUniversalScrapingTool\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"llm = ChatOpenAI()\n",
"\n",
"tool = ScrapelessUniversalScrapingTool()\n",
"\n",
"# Use the tool with an agent\n",
"tools = [tool]\n",
"agent = create_react_agent(llm, tools)\n",
"\n",
"for chunk in agent.stream(\n",
" {\n",
" \"messages\": [\n",
" (\n",
" \"human\",\n",
" \"Use the scrapeless scraping tool to fetch https://www.scrapeless.com/en and extract the h1 tag.\",\n",
" )\n",
" ]\n",
" },\n",
" stream_mode=\"values\",\n",
"):\n",
" chunk[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"id": "4ac8146c",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"- [Scrapeless Documentation](https://docs.scrapeless.com/en/universal-scraping-api/quickstart/introduction/)\n",
"- [Scrapeless API Reference](https://apidocs.scrapeless.com/api-12948840)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

libs/packages.yml

Lines changed: 3 additions & 0 deletions
@@ -716,4 +716,7 @@ packages:
 - name: toolbox-langchain
   repo: googleapis/mcp-toolbox-sdk-python
   path: packages/toolbox-langchain
+- name: langchain-scrapeless
+  repo: scrapeless-ai/langchain-scrapeless
+  path: .
