All notable changes to this project will be documented in this file. See Conventional Commits for commit guidelines.

## 3.16.0 (2026-02-06)
- adaptive-crawler: Fix persistence of rendering type detection results (#3368) (4abca8b)
- certain redirect responses change request methods (#3296) (efac644), closes #2586
- clean turbo cache and tsbuildinfo files in yarn clean (#3348) (6cd9456)
- core: ensure `maxCrawlDepth` warning is logged only once (#3337) (9d01334), closes #3336
- handle multiple `BasicCrawler.stop()` calls correctly (#3324) (9c0580b), closes #3257
- impit-client: pause fromWeb stream to prevent early consumption (#3347) (72aacb4), closes #555
- more permissive accept (#3373) (d03af1b), closes #3242
- remove deprecation from `RequestQueueV1` (#3341) (89309bc)
- suppress info message for undefined maxRequestsPerCrawl (#3237) (f3d9a79)
- add `@crawlee/stagehand` package for AI-powered browser automation (#3331) (a89cb5a), closes #3064
- Add a counter of in-flight rendering type detections (#3355) (565fc34)
- implements async iterators (#3352) (7f7a4ab), closes #3338
- make `handleCloudflareChallenge` more configurable (#3247) (629daf8), closes #3127
- utils: add `discoverValidSitemaps` utility (#3339) (29f52ed)

## 3.15.3 (2025-11-10)

- await retries inside `_timeoutAndRetry` (#3206) (9c1cf6d), closes /github.com/apify/crawlee/pull/3188#discussion_r2410256271
- cli: support creating projects with names that start with a number (#3219) (3f37845), closes #3213
- use shared enqueue links wrapper in `AdaptivePlaywrightCrawler` (#3188) (9569d19)

## 3.15.2 (2025-10-23)

- correctly apply `launchOptions` with `useIncognitoPages` (#3181) (84a4b70), closes /github.com/apify/crawlee/issues/3173#issuecomment-3346728227 #3173 #3173
- enable `systemInfoV2` by default (#3208) (617a343)
- Re-export MemoryStorage from the crawlee package (#3200) (e183ea9)

## 3.15.1 (2025-09-26)

## 3.15.0 (2025-09-17)

- `ImpitHttpClient` respects the internal `Request` timeout (#3103) (a35376d)
- `proxyUrls` list can contain `null` (#3142) (dc39cc2), closes #3136
- don't fail `exportData` calls on empty datasets (#3115) (298f170), closes #2734
- respect `maxCrawlDepth` with a custom enqueueLinks `transformRequestFunction` (#3159) (e2ecb74)
- add `collectAllKeys` option for `BasicCrawler.exportData` (#3129) (2ddfc9c), closes #3007
- add `TandemRequestProvider` for combined `RequestList` and `RequestQueue` usage (#2914) (4ca450f), closes #2499

## 3.14.1 (2025-08-05)
Note: Version bump only for package @crawlee/root

## 3.14.0 (2025-07-25)

- don't retire browsers with long-running `pre|postLaunchHooks` prematurely (#3062) (681660e)
- respect `exclude` option in `enqueueLinksByClickingElements` (#3058) (013eb02)
- retry on blocked status codes in `HttpCrawler` (#3060) (b5fcd79), closes /github.com/apify/crawlee/blob/f68d2a95d67cc6230122dc1a5226c57ca23d0ae7/packages/browser-crawler/src/internals/browser-crawler.ts#L481-L486 #3029
- validation of iterables when adding requests to the queue (#3091) (529a1dd), closes #3063
- add `maxCrawlDepth` crawler option (#3045) (0090df9), closes #2633
- Add llms.txt and llms-full.txt generator (#3061) (79f3ba5), closes #3046

## 3.13.10 (2025-07-09)

- call `onSkippedRequest` for `AdaptivePlaywrightCrawler.enqueueLinks` (#3043) (fc23d34), closes #3026 #3039
- improve enqueueLinks `limit` checking (#3038) (2774124), closes #3037

## 3.13.9 (2025-06-27)

- Do not log 'malformed sitemap content' on network errors in `Sitemap.tryCommonNames` (#3015) (64a090f), closes #2884
- Fix link filtering in enqueueLinks in AdaptivePlaywrightCrawler (#3021) (8a3b6f8), closes #2525
- Accept (Async)Iterables in `addRequests` methods (#3013) (a4ab748), closes #2980
- Report links skipped because of various filter conditions (#3026) (5a867bc), closes #3016

## 3.13.8 (2025-06-16)
- Do not enqueue more links than what the crawler is capable of processing (#2990) (ea094c8), closes #2728
- Persist rendering type detection results in `AdaptivePlaywrightCrawler` (#2987) (76431ba), closes #2899
- dataset: add collectAllKeys option for full CSV export (#2945) (#3007) (3b629da)
- support `KVS.listKeys()` `prefix` and `collection` parameters (#3001) (5c4726d), closes #2974

## 3.13.7 (2025-06-06)

## 3.13.6 (2025-06-05)

- enable full cookie support for `ImpitHttpClient` (#2991) (120f0a7)
- ensure `PlaywrightGotoOptions` won't result in `unknown` when playwright is not installed (#2995) (93eba38), closes #2994
- extract only `body` from `iframe` elements (#2986) (c36166e), closes #2979

## 3.13.5 (2025-05-20)

- add `MinimumSpeedStream` and `ByteCounterStream` helpers (#2970) (921c4ee)
- Allow the AdaptivePlaywrightCrawler result comparator to signal an inconclusive result (#2975) (7ba8906)

## 3.13.4 (2025-05-14)

- core: respect `systemInfoV2` in snapshotter (#2961) (4100eab), closes #2958
- core: use short timeouts for periodic `KVS.setRecord` calls (#2962) (d31d90e)
- core: optimize request unlocking to get rid of unnecessary unlock calls (#2963) (a433037)
- social: extract emails from each text node separately (#2952) (799afc1)

## 3.13.3 (2025-05-05)

- await `_createPageForBrowser` in browser pool (#2950) (27ba74b), closes #2789
- convert `@apilink` to `@link` on build (#2949) (abe1dee), closes #2717
- disable default fingerprints in the Camoufox template (#2928) (6dffa00)
- Fix trailing slash removal in BrowserPool (#2921) (c1fc439), closes #2878
- Fix useState behavior in adaptive crawler (#2941) (5282381)
- Persist SitemapRequestList state periodically (#2923) (e6e7a9f), closes #2897
- respect `autoscaledPoolOptions.isTaskReadyFunction` option (#2948) (fe2d206), closes #2922
- statistics: track actual request.retryCount in Statistics (#2940) (c9f7f54)

## 3.13.2 (2025-04-08)

## 3.13.1 (2025-04-07)

- don't double increment session usage count in `BrowserCrawler` (#2908) (3107e55), closes #2851
- rename `RobotsFile` to `RobotsTxtFile` (#2913) (3160f71), closes #2910
- treat `406` as other `4xx` status codes in `HttpCrawler` (#2907) (b0e6f6d), closes #2892

## 3.13.0 (2025-03-04)

- cheerio: don't decode HTML entities in `context.body` (#2838) (32d6d0e), closes #2401
- install browser in the `camoufox` template correctly (#2864) (a9d008c), closes #2863
- Make log message in RequestQueue.isFinished more accurate (#2848) (3d124ae)
- Simplified RequestQueueV2 implementation (#2775) (d1a094a), closes #2767 #2700
- Camoufox-based crawler template (#2842) (7f08de4), closes #2836
- improved cross platform metric collection (#2834) (e41b2f7), closes #2771
- playwright: add `handleCloudflareChallenge` helper (#2865) (9a1725f)
- remove old docker CI (#2831) (7f09d56)
- use native `impit` streaming (#2833) (af2fe23), closes #2756

## 3.12.2 (2025-01-27)

- core: type definition of Dataset.reduce (#2774) (59bc6d1), closes #2773
- destructure `CrawlerRunOptions` before passing them to `addRequests` (#2803) (02a598c), closes #2802
- graceful `BasicCrawler` tidy-up on `CriticalError` (#2817) (53331e8), closes #2807
- `impit`-based `HttpClient` implementation (#2787) (61d7ffa)
- add support for parsing comma-separated list environment variables (#2765) (4e50c47)
- stopping the crawlers gracefully with `BasicCrawler.stop()` (#2792) (af2966f), closes #2777

## 3.12.1 (2024-12-04)
- log status message timeouts to debug level (55ee44a)
- social: support new URL formats for Facebook, YouTube and X (#2758) (4c95847), closes #525

## 3.12.0 (2024-11-04)

- `.trim()` urls from pretty-printed sitemap.xml files (#2709) (802a6fe), closes #2698
- ensure correct column order in CSV export (#2734) (b66784f), closes #2718
- ignore errors from iframe content extraction (#2714) (627e5c2), closes #2708
- update `fingerprintGeneratorOptions` types (#2705) (fcb098d), closes #2703

## 3.11.5 (2024-10-04)

- `forefront` request fetching in RQv2 (#2689) (03951bd), closes #2669
- `prolong-` and `deleteRequestLock` `forefront` option (#2690) (cba8da3), closes #2681 #2689 #2669
- check `.isFinished()` before `RequestList` reads (#2695) (6fa170f)
- core: accept `UInt8Array` in `KVS.setValue()` (#2682) (8ef0e60)
- core: trigger `errorHandler` for session errors (#2683) (7d72bcb), closes #2678
- decode special characters in proxy `username` and `password` (#2696) (0f0fcc5)
- http-crawler: avoid crashing when gotOptions.cache is on (#2686) (1106d3a)
- puppeteer: rename `ignoreHTTPSErrors` to `acceptInsecureCerts` to support v23 (#2684) (f3927e6)
- respect `forefront` option in `MemoryStorage`'s `RequestQueue` (#2681) (b0527f9), closes #2669

## 3.11.4 (2024-09-23)

- `SitemapRequestList.teardown()` doesn't break `persistState` calls (#2673) (fb2c5cd), closes /github.com/apify/crawlee/blob/f3eb99d9fa9a7aa0ec1dcb9773e666a9ac14fb76/packages/core/src/storages/sitemap_request_list.ts#L446 #2672

## 3.11.3 (2024-09-03)

- improve `FACEBOOK_REGEX` to match older style page URLs (#2650) (a005e69), closes #2216
- RequestQueueV2: reset recently handled cache too if the queue is pending for too long (#2656) (51a69bc)

## 3.11.2 (2024-08-28)

- RequestQueueV2: remove `inProgress` cache, rely solely on locked states (#2601) (57fcb08)
- use namespace imports for cheerio to be compatible with v1 (#2641) (f48296f)
- Use the correct mutex in memory storage RequestQueueClient (#2623) (2fa8a29)
- `globs` & `regexps` for `SitemapRequestList` (#2631) (b5fd3a9)
- resilient sitemap loading (#2619) (1dd7660)

## 3.11.1 (2024-07-24)

## 3.11.0 (2024-07-09)

- add `iframe` expansion to `parseWithCheerio` in browsers (#2542) (328d085), closes #2507
- add `ignoreIframes` opt-out from the Cheerio iframe expansion (#2562) (474a8dc)
- Sitemap-based request list implementation (#2498) (7bf8f0b)

## 3.10.5 (2024-06-12)
- allow creating new adaptive crawler instance without any parameters (9b7f595)
- declare missing peer dependencies in `@crawlee/browser` package (#2532) (3357c7f)
- fix detection of HTTP site when using the `useState` in adaptive crawler (#2530) (7e195c1)
- mark `context.request.loadedUrl` and `id` as required inside the request handler (#2531) (2b54660)

## 3.10.4 (2024-06-11)

- add `waitForAllRequestsToBeAdded` option to `enqueueLinks` helper (925546b), closes #2318
- add missing `useState` implementation into crawling context (eec4a71)
- make `crawler.log` publicly accessible (#2526) (3e9e665)
- playwright: allow passing new context options in `launchOptions` on type level (0519d40), closes #1849
- respect `crawler.log` when creating child logger for `Statistics` (0a0d75d), closes #2412

## 3.10.3 (2024-06-07)
- adaptive-crawler: log only once for the committed request handler execution (#2524) (533bd3f)
- increase timeout for retiring inactive browsers (#2523) (195f176)
- respect implicit router when no `requestHandler` is provided in `AdaptiveCrawler` (#2518) (31083aa)
- revert the scaling steps back to 5% (5bf32f8)
- add `waitForSelector` context helper + `parseWithCheerio` in adaptive crawler (#2522) (6f88e73)
- log desired concurrency in the default status message (9f0b796)

## 3.10.2 (2024-06-03)
- Autodetect sitemap filetype from content (#2497) (62a9f40), closes #2461
- improve fix for double extension in KVS with HTML files (#2505) (157927d), closes #2419

## 3.10.1 (2024-05-23)

- adjust `URL_NO_COMMAS_REGEX` regexp to allow single character hostnames (#2492) (ec802e8), closes #2487
- investigate and temp fix for possible 0-concurrency bug in RQv2 (#2494) (4ebe820)
- provide URLs to the error snapshot (#2482) (7f64145), closes /github.com/apify/apify-sdk-js/blob/master/packages/apify/src/key_value_store.ts#L25

## 3.10.0 (2024-05-16)

- `EnqueueStrategy.All` erroring with links using unsupported protocols (#2389) (8db3908)
- conversion between tough cookies and browser pool cookies (#2443) (74f73ab)
- fire local `SystemInfo` events every second (#2454) (1fa9a66)
- use createSessionFunction when loading Session from persisted state (#2444) (3c56b4c)
- do not drop statistics on migration/resurrection/resume (#2462) (8ce7dd4)
- double tier decrement in tiered proxy (#2468) (3a8204b)
- Fixed double extension for screenshots (#2419) (e8b39c4), closes #1980
- malformed sitemap url when sitemap index child contains querystring (#2430) (e4cd41c)
- return true when robots.isAllowed returns undefined (#2439) (6f541f8), closes #2437
- sitemap `content-type` check breaks on `content-type` parameters (#2442) (db7d372)
- add `FileDownload` "crawler" (#2435) (d73756b)
- implement ErrorSnapshotter for error context capture (#2332) (e861dfd), closes #2280
- make `RequestQueue` v2 the default queue, see more on Apify blog (#2390) (41ae8ab), closes #2388
- improve scaling based on memory (#2459) (2d5d443)
- optimize `RequestList` memory footprint (#2466) (12210bd)
- optimize adding large amount of requests via `crawler.addRequests()` (#2456) (6da86a8)

## 3.9.2 (2024-04-17)

- break up growing stack in `AutoscaledPool.notify` (#2422) (6f2e6b0), closes #2421
- don't call `notify` in `addRequests()` (#2425) (c4d5446), closes #2421

## 3.9.1 (2024-04-11)

## 3.9.0 (2024-04-10)

- include actual key in error message of KVS' `setValue` (#2411) (9089bf1)
- notify autoscaled pool about newly added requests (#2400) (a90177d)
- puppeteer: allow passing `networkidle` to `waitUntil` in `gotoExtended` (#2399) (5d0030d), closes #2398
- sitemaps support `application/xml` (#2408) (cbcf47a)
- `createAdaptivePlaywrightRouter` utility (#2415) (cee4778), closes #2407
- `tieredProxyUrls` for ProxyConfiguration (#2348) (5408c7f)
- better `newUrlFunction` for ProxyConfiguration (#2392) (330598b), closes #2348 #2065
- expand #shadow-root elements automatically in `parseWithCheerio` helper (#2396) (a05b3a9)

## 3.8.2 (2024-03-21)

- core: solve possible dead locks in `RequestQueueV2` (#2376) (ffba095)
- correctly report gzip decompression errors (#2368) (84a2f17)
- fix detection of older puppeteer versions (890669b), closes #2370
- puppeteer: improve detection of older versions (98d4e86)
- use 0 (number) instead of false as default for sessionRotationCount (#2372) (667a3e7)
- implement global storage access checking and use it to prevent unwanted side effects in adaptive crawler (#2371) (fb3b7da), closes #2364

## 3.8.1 (2024-02-22)

## 3.8.0 (2024-02-21)

- `createRequests` works correctly with `exclude` (and nothing else) (#2321) (048db09)
- declare missing dependencies on `csv-stringify` and `fs-extra` (#2326) (718959d), closes /github.com/redabacha/crawlee/blob/2f05ed22b203f688095300400bb0e6d03a03283c/.eslintrc.json#L50
- puppeteer: add 'process' to the browser bound methods (#2329) (2750ba6)
- puppeteer: replace `page.waitForTimeout()` with `sleep()` (52d7219), closes #2335
- puppeteer: support `puppeteer@v22` (#2337) (3cc360a)
- `KeyValueStore.recordExists()` (#2339) (8507a65)
- accessing crawler state, key-value store and named datasets via crawling context (#2283) (58dd5fc)
- adaptive playwright crawler (#2316) (8e4218a)
- add Sitemap.tryCommonNames to check well known sitemap locations (#2311) (85589f1), closes #2307
- ci: snapshot docs automatically on minor/major publish (#2344) (092f51e)
- core: add `userAgent` parameter to `RobotsFile.isAllowed()` + `RobotsFile.from()` helper (#2338) (343c159)
- Support plain-text sitemap files (sitemap.txt) (#2315) (0bee7da)

## 3.7.3 (2024-01-30)
- enqueueLinks: filter out empty/nullish globs (#2286) (84319b3)
- pass on an invisible CF turnstile (#2277) (d8734e7), closes #2256

## 3.7.2 (2024-01-09)

## 3.7.1 (2024-01-02)

## 3.7.0 (2023-12-21)

- `retryOnBlocked` doesn't override the blocked HTTP codes (#2243) (81672c3)
- browser-pool: respect user options before assigning fingerprints (#2190) (f050776), closes #2164
- filter out empty globs (#2205) (41322ab), closes #2200
- make CLI work on Windows too with `--no-purge` (#2244) (83f3179)
- make SessionPool queue up getSession calls to prevent overruns (#2239) (0f5665c), closes #1667
- MemoryStorage: lock request JSON file when reading to support multiple process crawling (#2215) (eb84ce9)
- allow configuring crawler statistics (#2213) (9fd60e4), closes #1789
- check enqueue link strategy post redirect (#2238) (3c5f9d6), closes #2173
- log cause with `retryOnBlocked` (#2252) (e19a773), closes #2249
- robots.txt and sitemap.xml utils (#2214) (fdfec4f), closes #2187

## 3.6.2 (2023-11-26)

## 3.6.1 (2023-11-15)

- ts: ignore import errors for `got-scraping` (012fc9e)
- ts: specify type explicitly for logger (aec3550)

## 3.6.0 (2023-11-15)

- add `skipNavigation` option to `enqueueLinks` (#2153) (118515d)
- BrowserPool: ignore `--no-sandbox` flag for webkit launcher (#2148) (1eb2f08), closes #1797
- core: respect some advanced options for `RequestList.open()` + improve docs (#2158) (c5a1b07)
- declare missing dependency on got-scraping in the core package (cd2fd4d)
- e2e cheerio-throw-on-ssl-errors (#2154) (f2d333d)
- provide more detailed error messages for browser launch errors (#2157) (f188ebe)
- retry incorrect Content-Type when response has blocked status code (#2176) (b54fb8b), closes #1994

## 3.5.8 (2023-10-17)
- MemoryStorage: ignore invalid files for request queues (#2132) (fa58581), closes #1985
- refactor `extractUrls` to split the text line by line first (#2122) (7265cd7)

## 3.5.7 (2023-10-05)
- add warning when we detect use of RL and RQ, but RQ is not provided explicitly (#2115) (6fb1c55), closes #1773
- ensure the status message cannot get the crawler stuck (#2114) (9034f08)
- RQ request count is consistent after migration (#2116) (9ab8c18), closes #1855 #1855

## 3.5.6 (2023-10-04)
- add incapsula iframe selector to the blocked list (#2111) (2b17d8a), closes apify/store-website-content-crawler#154

## 3.5.5 (2023-10-02)
- allow to use any version of puppeteer or playwright (#2102) (0cafceb), closes #2101
- session pool leaks memory on multiple crawler runs (#2083) (b96582a), closes #2074 #2031
- templates: install browsers on postinstall for playwright (#2104) (323768b)
- types: make return type of RequestProvider.open and RequestQueue(v2).open strict and accurate (#2096) (dfaddb9)

## 3.5.4 (2023-09-11)

- core: allow explicit calls to `purgeDefaultStorage` to wipe the storage on each call (#2060) (4831f07)
- various helpers opening KVS now respect Configuration (#2071) (59dbb16)

## 3.5.3 (2023-08-31)
- browser-pool: improve error handling when browser is not found (#2050) (282527f), closes #1459
- clean up `inProgress` cache when delaying requests via `sameDomainDelaySecs` (#2045) (f63ccc0)
- crawler instances with different StorageClients do not affect each other (#2056) (3f4c863)
- pin all internal dependencies (#2041) (d6f2b17), closes #2040
- respect current config when creating implicit `RequestQueue` instance (845141d), closes #2043

## 3.5.2 (2023-08-21)

- make the `Request` constructor options typesafe (#2034) (75e7d65)
- pin `@crawlee/*` packages versions in `crawlee` metapackage (#2040) (61f91c7), closes /github.com/apify/crawlee/pull/2002#issuecomment-1680091061
- support `DELETE` requests in `HttpCrawler` (#2039) (7ea5c41), closes #1658

## 3.5.1 (2023-08-16)

- add `Request.maxRetries` to the `RequestOptions` interface (#2024) (6433821)
- log original error message on session rotation (#2022) (8a11ffb)

## 3.5.0 (2023-07-31)

- cleanup worker stuff from memory storage to fix `vitest` (#2004) (d2e098c), closes #1999
- core: add requests from URL list (`requestsFromUrl`) to the queue in batches (418fbf8), closes #1995
- core: support relative links in `enqueueLinks` explicitly provided via `urls` option (#2014) (cbd9d08), closes #2005
- add `closeCookieModals` context helper for Playwright and Puppeteer (#1927) (98d93bb)
- add support for `sameDomainDelay` (#2003) (e796883), closes #1993
- basic-crawler: allow configuring the automatic status message (#2001) (3eb4e4c)
- core: use `RequestQueue.addBatchedRequests()` in `enqueueLinks` helper (4d61ca9), closes #1995
- retire session on proxy error (#2002) (8c0928b), closes #1912

## 3.4.2 (2023-07-19)

- basic-crawler: limit `internalTimeoutMillis` in addition to `requestHandlerTimeoutMillis` (#1981) (8122622), closes #1766
- core: add `RequestQueue.addRequestsBatched()` that is non-blocking (#1996) (c85485d), closes #1995
- retryOnBlocked detects blocked webpage (#1956) (766fa9b)

## 3.4.1 (2023-07-13)

- http-crawler: replace `IncomingMessage` with `PlainResponse` for context's `response` (#1973) (2a1cc7f), closes #1964

## 3.4.0 (2023-06-12)

- respect `<base>` when enqueuing (#1936) (aeef572)
- stop lerna from overwriting the copy.ts results (#1946) (69bed40)
- add LinkeDOMCrawler (#1907) (1c69560), closes /github.com/apify/crawlee/pull/1890#issuecomment-1533271694
- infiniteScroll has maxScrollHeight limit (#1945) (44997bb)

## 3.3.3 (2023-05-31)
- MemoryStorage: handle EXDEV errors when purging storages (#1932) (e656050)
- set status message every 5 seconds and log it via debug level (#1918) (32aede6)
- add support for `requestsFromUrl` to `RequestQueue` (#1917) (7f2557c)
- core: add `Request.maxRetries` to allow overriding the `maxRequestRetries` (#1925) (c5592db)

## 3.3.2 (2023-05-11)

- MemoryStorage: cache requests in `RequestQueue` (#1899) (063dcd1)
- respect config object when creating `SessionPool` (#1881) (db069df)
- allow running single crawler instance multiple times (#1844) (9e6eb1e), closes #765
- HttpCrawler: add `parseWithCheerio` helper to `HttpCrawler` (#1906) (ff5f76f)
- router: allow inline router definition (#1877) (2d241c9)
- RQv2 memory storage support (#1874) (049486b)
- support alternate storage clients when opening storages (#1901) (661e550)

## 3.3.1 (2023-04-11)
- infiniteScroll() not working in Firefox (#1826) (4286c5d), closes #1821
- jsdom: add timeout to the window.load wait when `runScripts` are enabled (806de31)
- jsdom: delay closing of the window and add some polyfills (2e81618)
- jsdom: use no-op `enqueueLinks` in http crawlers when parsing fails (fd35270)
- MemoryStorage: handling of readable streams for key-value stores when setting records (#1852) (a5ee37d), closes #1843
- start status message logger after the crawl actually starts (5d1df7a)
- status message - total requests (#1842) (710f734)
- Storage: queue up opening storages to prevent issues in concurrent calls (#1865) (044c740)
- templates: added missing '@types/node' peer dependency (#1860) (d37a7e2)
- try to detect stuck request queue and fix its state (#1837) (95a9f94)
- add `parseWithCheerio` context helper to cheerio crawler (b336a73)
- jsdom: add `parseWithCheerio` context helper (c8f0796)

## 3.3.0 (2023-03-09)

- add `proxyUrl` to `DownloadListOfUrlsOptions` (779be1e), closes #1780
- CheerioCrawler: pass `isXml` down to response parser (#1807) (af7a5c4), closes #1794
- ignore invalid URLs in `enqueueLinks` in browser crawlers (#1803) (5ac336c)
- MemoryStorage: request queues race conditions causing crashes (#1806) (083a9db), closes #1792
- MemoryStorage: RequestQueue should respect `forefront` (#1816) (b68e86a), closes #1787
- MemoryStorage: RequestQueue#handledRequestCount should update (#1817) (a775e4a), closes #1764
- add basic support for `setStatusMessage` (#1790) (c318980)
- core: add `exclude` option to `enqueueLinks` (#1786) (2e833dc), closes #1785
- move the status message implementation to Crawlee, noop in storage (#1808) (99c3fdc)

## 3.2.2 (2023-02-08)

## 3.2.1 (2023-02-07)

- add `QueueOperationInfo` export to the core package (5ec6c24)

## 3.2.0 (2023-02-07)

- allow `userData` option in `enqueueLinksByClickingElements` (#1749) (736f85d), closes #1617
- clone `request.userData` when creating new request object (#1728) (222ef59), closes #1725
- Correctly compute `pendingRequestCount` in request queue (#1765) (946535f), closes /github.com/apify/crawlee/blob/master/packages/memory-storage/src/resource-clients/request-queue.ts#L291-L298
- declare missing dependency on `tslib` (27e96c8), closes #1747
- ensure CrawlingContext interface is inferred correctly in route handlers (aa84633)
- KeyValueStore: big buffers should not crash (#1734) (2f682f7), closes #1732 #1710
- memory-storage: dont fail when storage already purged (#1737) (8694027), closes #1736
- update playwright to 1.29.2 and make peer dep. less strict (#1735) (c654fcd), closes #1723
- utils: add missing dependency on `ow` (bf0e03c), closes #1716
- add `forefront` option to all `enqueueLinks` variants (#1760) (a01459d), closes #1483
- enqueueLinks: add SameOrigin strategy and relax protocol matching for the other strategies (#1748) (4ba982a)
- MemoryStorage: read from fs if persistStorage is enabled, ram only otherwise (#1761) (e903980)

## 3.1.4 (2022-12-14)
- session.markBad() on requestHandler error (#1709) (e87eb1f), closes #1635 /github.com/apify/crawlee/blob/5ff04faa85c3a6b6f02cd58a91b46b80610d8ae6/packages/browser-crawler/src/internals/browser-crawler.ts#L524

## 3.1.3 (2022-12-07)
- always show error origin if inside the userland (#1677) (bbe9045)
- hideInternalConsole in JSDOMCrawler (#1707) (8975f90)

## 3.1.2 (2022-11-15)
- injectJQuery in context does not survive navs (#1661) (493a7cf)
- make router error message more helpful for undefined routes (#1678) (ab359d8)
- MemoryStorage: correctly respect the desc option (#1666) (b5f37f6)
- requestHandlerTimeout timing (#1660) (493ea0c)
- shallow clone browserPoolOptions before normalization (#1665) (22467ca)
- support headfull mode in playwright js project template (ea2e61b)
- support headfull mode in puppeteer js project template (e6aceb8)

## 3.1.1 (2022-11-07)

- `utils.playwright.blockRequests` warning message (#1632) (76549eb)
- concurrency option override order (#1649) (7bbad03)
- handle non-error objects thrown gracefully (#1652) (c3a4e1a)
- mark session as bad on failed requests (#1647) (445ae43)
- support reloading of sessions with lots of retries (ebc89d2)
- fix type errors when `playwright` is not installed (#1637) (de9db0c)
- upgrade to puppeteer@19.x (#1623) (ce36d6b)
- add static `set` and `useStorageClient` shortcuts to `Configuration` (2e66fa2)
- enable migration testing (#1583) (ee3a68f)
- playwright: disable animations when taking screenshots (#1601) (4e63034)

## 3.1.0 (2022-10-13)

- add overload for `KeyValueStore.getValue` with defaultValue (#1541) (e3cb509)
- add retry attempts to methods in CLI (#1588) (9142e59)
- allow `label` in `enqueueLinksByClickingElements` options (#1525) (18b7c25)
- basic-crawler: handle `request.noRetry` after `errorHandler` (#1542) (2a2040e)
- build storage classes by using `this` instead of the class (#1596) (2b14eb7)
- correct some typing exports (#1527) (4a136e5)
- do not hide stack trace of (retried) Type/Syntax/ReferenceErrors (469b4b5)
- enqueueLinks: ensure the enqueue strategy is respected alongside user patterns (#1509) (2b0eeed)
- enqueueLinks: prevent useless request creations when filtering by user patterns (#1510) (cb8fe36)
- export `Cookie` from `crawlee` metapackage (7b02ceb)
- handle redirect cookies (#1521) (2f7fc7c)
- http-crawler: do not hang on POST without payload (#1546) (8c87390)
- remove undeclared dependency on core package from puppeteer utils (827ae60)
- support TypeScript 4.8 (#1507) (4c3a504)
- wait for persist state listeners to run when event manager closes (#1481) (aa550ed)
- add `Dataset.exportToValue` (#1553) (acc6344)
- add `Dataset.getData()` shortcut (522ed6e)
- add `utils.downloadListOfUrls` to crawlee metapackage (7b33b0a)
- add `utils.parseOpenGraph()` (#1555) (059f85e)
- add `utils.playwright.compileScript` (#1559) (2e14162)
- add `utils.playwright.infiniteScroll` (#1543) (60c8289), closes #1528
- add `utils.playwright.saveSnapshot` (#1544) (a4ceef0)
- add global `useState` helper (#1551) (2b03177)
- add static `Dataset.exportToValue` (#1564) (a7c17d4)
- allow disabling storage persistence (#1539) (f65e3c6)
- bump puppeteer support to 17.x (#1519) (b97a852)
- core: add `forefront` option to `enqueueLinks` helper (f8755b6), closes #1595
- don't close page before calling errorHandler (#1548) (1c8cd82)
- enqueue links by clicking for Playwright (#1545) (3d25ade)
- error tracker (#1467) (6bfe1ce)
- make the CLI download directly from GitHub (#1540) (3ff398a)
- router: add userdata generic to addHandler (#1547) (19cdf13)
- use JSON5 for `INPUT.json` to support comments (#1538) (09133ff)

## 3.0.4 (2022-08-22)
- bump puppeteer support to 15.1
- key value stores emitting an error when multiple write promises ran in parallel (#1460) (f201cca)
- fix dockerfiles in project templates

## 3.0.3 (2022-08-11)
- add missing configuration to CheerioCrawler constructor (#1432)
- sendRequest types (#1445)
- respect `headless` option in browser crawlers (#1455)
- make `CheerioCrawlerOptions` type more loose (d871d8c)
- improve dockerfiles and project templates (7c21a64)
- add `utils.playwright.blockRequests()` (#1447)
- http-crawler (#1440)
- prefer `/INPUT.json` files for `KeyValueStore.getInput()` (#1453)
- jsdom-crawler (#1451)
- add `RetryRequestError` + add error to the context for BC (#1443)
- add `keepAlive` to crawler options (#1452)

## 3.0.2 (2022-07-28)
- regression in resolving the base url for enqueue link filtering (1422)
- improve file saving on memory storage (1421)
- add `UserData` type argument to `CheerioCrawlingContext` and related interfaces (1424)
- always limit `desiredConcurrency` to the value of `maxConcurrency` (bcb689d)
- wait for storage to finish before resolving `crawler.run()` (9d62d56)
- using explicitly typed router with `CheerioCrawler` (07b7e69)
- declare dependency on `ow` in `@crawlee/cheerio` package (be59f99)
- use `crawlee@^3.0.0` in the CLI templates (6426f22)
- fix building projects with TS when puppeteer and playwright are not installed (1404)
- enqueueLinks should respect full URL of the current request for relative link resolution (1427)
- use `desiredConcurrency: 10` as the default for `CheerioCrawler` (1428)
- feat: allow configuring what status codes will cause session retirement (1423)
- feat: add support for middlewares to the `Router` via `use` method (1431)

## 3.0.1 (2022-07-26)
- remove `JSONData` generic type arg from `CheerioCrawler` (#1402)
- rename default storage folder to just `storage` (#1403)
- remove trailing slash for proxyUrl (#1405)
- run browser crawlers in headless mode by default (#1409)
- rename interface `FailedRequestHandler` to `ErrorHandler` (#1410)
- ensure default route is not ignored in `CheerioCrawler` (#1411)
- add `headless` option to `BrowserCrawlerOptions` (#1412)
- processing custom cookies (#1414)
- enqueue link not finding relative links if the checked page is redirected (#1416)
- fix building projects with TS when puppeteer and playwright are not installed (#1404)
- calling `enqueueLinks` in browser crawler on page without any links (385ca27)
- improve error message when no default route provided (04c3b6a)
- feat: add parseWithCheerio for puppeteer & playwright (#1418)

## 3.0.0 (2022-07-13)
This section summarizes most of the breaking changes between Crawlee (v3) and Apify SDK (v2). Crawlee is the spiritual successor to Apify SDK, so we decided to keep the versioning and release Crawlee as v3.
Up until version 3 of `apify`, the package contained both scraping related tools and Apify platform related helper methods. With v3 we are splitting the whole project into two main parts:
- Crawlee, the new web-scraping library, available as the `crawlee` package on NPM
- Apify SDK, helpers for the Apify platform, available as the `apify` package on NPM
Moreover, the Crawlee library is published as several packages under the `@crawlee` namespace:
- `@crawlee/core`: the base for all the crawler implementations, also contains things like `Request`, `RequestQueue`, `RequestList` or `Dataset` classes
- `@crawlee/basic`: exports `BasicCrawler`
- `@crawlee/cheerio`: exports `CheerioCrawler`
- `@crawlee/browser`: exports `BrowserCrawler` (which is used for creating `@crawlee/playwright` and `@crawlee/puppeteer`)
- `@crawlee/playwright`: exports `PlaywrightCrawler`
- `@crawlee/puppeteer`: exports `PuppeteerCrawler`
- `@crawlee/memory-storage`: `@apify/storage-local` alternative
- `@crawlee/browser-pool`: previously the `browser-pool` package
- `@crawlee/utils`: utility methods
- `@crawlee/types`: holds TS interfaces mainly about the `StorageClient`
> As Crawlee is not yet released as `latest`, we need to install from the `next` distribution tag!
Most of the Crawlee packages are extending and reexporting each other, so it's enough to install just the one you plan on using, e.g. `@crawlee/playwright` if you plan on using playwright - it already contains everything from the `@crawlee/browser` package, which includes everything from `@crawlee/basic`, which includes everything from `@crawlee/core`.
```bash
npm install crawlee@next
```

Or if all we need is cheerio support, we can install only `@crawlee/cheerio`:

```bash
npm install @crawlee/cheerio@next
```

When using playwright or puppeteer, we still need to install those dependencies explicitly - this allows the users to be in control of which version will be used.

```bash
npm install crawlee@next playwright
# or npm install @crawlee/playwright@next playwright
```

Alternatively we can also use the `crawlee` meta-package, which contains (re-exports) most of the `@crawlee/*` packages, and therefore contains all the crawler classes.
Sometimes you might want to use some utility methods from `@crawlee/utils`, so you might want to install that as well. This package contains some utilities that were previously available under `Apify.utils`. Browser related utilities can be also found in the crawler packages (e.g. `@crawlee/playwright`).
Both Crawlee and Apify SDK are full TypeScript rewrites, so they include up-to-date types in the package. For your TypeScript crawlers we recommend using our predefined TypeScript configuration from the `@apify/tsconfig` package. Don't forget to set the `module` and `target` to `ES2022` or above to be able to use top-level await.
> The `@apify/tsconfig` config has `noImplicitAny` enabled; you might want to disable it during the initial development, as it will cause build failures if you leave some unused local variables in your code.
```json
{
"extends": "@apify/tsconfig",
"compilerOptions": {
"module": "ES2022",
"target": "ES2022",
"outDir": "dist",
"lib": ["DOM"]
},
"include": [
"./src/**/*"
]
}
```

For the Dockerfile, we recommend using a multi-stage build, so you don't install dev dependencies like TypeScript in your final image:
```dockerfile
# using multistage build, as we need dev deps to build the TS source code
FROM apify/actor-node:16 AS builder
# copy all files, install all dependencies (including dev deps) and build the project
COPY . ./
RUN npm install --include=dev \
&& npm run build
# create final image
FROM apify/actor-node:16
# copy only necessary files
COPY --from=builder /usr/src/app/package*.json ./
COPY --from=builder /usr/src/app/README.md ./
COPY --from=builder /usr/src/app/dist ./dist
COPY --from=builder /usr/src/app/apify.json ./apify.json
COPY --from=builder /usr/src/app/INPUT_SCHEMA.json ./INPUT_SCHEMA.json
# install only prod deps
RUN npm --quiet set progress=false \
&& npm install --only=prod --no-optional \
&& echo "Installed NPM packages:" \
&& (npm list --only=prod --no-optional --all || true) \
&& echo "Node.js version:" \
&& node --version \
&& echo "NPM version:" \
&& npm --version
# run compiled code
CMD npm run start:prod
```

Previously we had a magical `stealth` option in the puppeteer crawler that enabled several tricks aiming to mimic real users as much as possible. While this worked to a certain degree, we decided to replace it with generated browser fingerprints.
In case we don't want to have dynamic fingerprints, we can disable this behaviour via `useFingerprints` in `browserPoolOptions`:
```ts
const crawler = new PlaywrightCrawler({
browserPoolOptions: {
useFingerprints: false,
},
});
```

Previously, if we wanted to get or add cookies for the session that would be used for the request, we had to call `session.getPuppeteerCookies()` or `session.setPuppeteerCookies()`. Since these methods could be used with any of our crawlers, not just `PuppeteerCrawler`, they have been renamed to `session.getCookies()` and `session.setCookies()` respectively. Otherwise, their usage is exactly the same!
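
A minimal sketch of the renamed calls inside a request handler (the `session` object comes from the crawling context and may be undefined if session pools are disabled):

```ts
import { PuppeteerCrawler } from 'crawlee';

const crawler = new PuppeteerCrawler({
    async requestHandler({ session, request }) {
        if (session) {
            // v2: session.getPuppeteerCookies(request.url)
            const cookies = session.getCookies(request.url);
            // v2: session.setPuppeteerCookies(cookies, request.url)
            session.setCookies(cookies, request.url);
        }
    },
});
```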
When we store some data or intermediate state (like the one `RequestQueue` holds), we now use `@crawlee/memory-storage` by default. It is an alternative to `@apify/storage-local` that stores the state in memory (as opposed to the SQLite database used by `@apify/storage-local`). While the state is stored in memory, it also dumps it to the file system so we can observe it, and it respects the existing data stored in the KeyValueStore (e.g. the INPUT.json file).
When we want to run the crawler on the Apify platform, we need to use `Actor.init` or `Actor.main`, which will automatically switch the storage client to `ApifyClient` when on the Apify platform.
We can still use `@apify/storage-local`; to do so, first install it and pass it to the `Actor.init` or `Actor.main` options:

> `@apify/storage-local` v2.1.0+ is required for Crawlee.
```ts
import { Actor } from 'apify';
import { ApifyStorageLocal } from '@apify/storage-local';
const storage = new ApifyStorageLocal(/* options like `enableWalMode` belong here */);
await Actor.init({ storage });
```

Previously the state was preserved between local runs, and we had to use the `--purge` argument of the `apify-cli`. With Crawlee, this is now the default behaviour; we purge the storage automatically on the `Actor.init/main` call. We can opt out of it via `purge: false` in the `Actor.init` options.
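
For example, a minimal sketch of opting out:

```ts
import { Actor } from 'apify';

// keep the data from previous local runs instead of purging it on startup
await Actor.init({ purge: false });
```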
Some options were renamed to better reflect what they do. We still support all the old parameter names too, but not at the TS level.
- `handleRequestFunction` -> `requestHandler`
- `handlePageFunction` -> `requestHandler`
- `handleRequestTimeoutSecs` -> `requestHandlerTimeoutSecs`
- `handlePageTimeoutSecs` -> `requestHandlerTimeoutSecs`
- `requestTimeoutSecs` -> `navigationTimeoutSecs`
- `handleFailedRequestFunction` -> `failedRequestHandler`
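
To illustrate, here is a hedged before/after sketch of the renames, using `CheerioCrawler` as an example (any crawler class works the same way):

```ts
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    // v2: handlePageFunction
    async requestHandler({ request, log }) {
        log.info(`Processing ${request.url}`);
    },
    // v2: handlePageTimeoutSecs
    requestHandlerTimeoutSecs: 60,
    // v2: requestTimeoutSecs
    navigationTimeoutSecs: 30,
    // v2: handleFailedRequestFunction
    async failedRequestHandler({ request, log }) {
        log.warning(`Request ${request.url} failed too many times.`);
    },
});
```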
We also renamed the crawling context interfaces, so they follow the same convention and are more meaningful:
- `CheerioHandlePageInputs` -> `CheerioCrawlingContext`
- `PlaywrightHandlePageFunction` -> `PlaywrightCrawlingContext`
- `PuppeteerHandlePageFunction` -> `PuppeteerCrawlingContext`
Some utilities previously available under the `Apify.utils` namespace are now moved to the crawling context and are context aware. This means they have some parameters automatically filled in from the context, like the current `Request` instance or current `Page` object, or the `RequestQueue` bound to the crawler.
One common helper that received more attention is `enqueueLinks`. As mentioned above, it is context aware - we no longer need to pass in the `requestQueue` or `page` arguments (or the cheerio handle `$`). In addition to that, it now offers 3 enqueuing strategies:
- `EnqueueStrategy.All` (`'all'`): Matches any URLs found
- `EnqueueStrategy.SameHostname` (`'same-hostname'`): Matches any URLs that have the same subdomain as the base URL (default)
- `EnqueueStrategy.SameDomain` (`'same-domain'`): Matches any URLs that have the same domain name. For example, `https://wow.an.example.com` and `https://example.com` will both be matched for a base url of `https://example.com`.
This means we can even call `enqueueLinks()` without any parameters. By default, it will go through all the links found on the current page and filter only those targeting the same subdomain.
Moreover, we can specify patterns the URL should match via globs:
```ts
const crawler = new PlaywrightCrawler({
async requestHandler({ enqueueLinks }) {
await enqueueLinks({
globs: ['https://apify.com/*/*'],
// we can also use `regexps` and `pseudoUrls` keys here
});
},
});
```

All crawlers now have the `RequestQueue` instance automatically available via the `crawler.getRequestQueue()` method. It will create the instance for you if it does not exist yet. This means we no longer need to create the `RequestQueue` instance manually, and we can just use the `crawler.addRequests()` method described below.
> We can still create the `RequestQueue` explicitly; the `crawler.getRequestQueue()` method will respect that and return the instance provided via crawler options.
We can now add multiple requests in batches. The newly added `addRequests` method will handle everything for us. It enqueues the first 1000 requests and resolves, while continuing with the rest in the background, again in smaller 1000-item batches, so we don't fall into any API rate limits. This means the crawling will start almost immediately (within a few seconds at most), something previously possible only with a combination of `RequestQueue` and `RequestList`.
```ts
// will resolve right after the initial batch of 1000 requests is added
const result = await crawler.addRequests([/* many requests, can be even millions */]);
// if we want to wait for all the requests to be added, we can await the `waitForAllRequestsToBeAdded` promise
await result.waitForAllRequestsToBeAdded;
```

Previously, an error thrown from inside the request handler resulted in the full error object being logged. With Crawlee, we log only the error message as a warning, as long as we know the request will be retried. If you want to enable verbose logging like in v2, use the `CRAWLEE_VERBOSE_LOG` env var.
In v1 we replaced the underlying implementation of `requestAsBrowser` to be just a proxy over calling `got-scraping` - our custom extension to `got` that tries to mimic real browsers as much as possible. With v3, we are removing `requestAsBrowser`, encouraging the use of `got-scraping` directly.
For easier migration, we also added a `context.sendRequest()` helper that allows processing the context-bound `Request` object through `got-scraping`:
```ts
const crawler = new BasicCrawler({
async requestHandler({ sendRequest, log }) {
// we can use the options parameter to override gotScraping options
const res = await sendRequest({ responseType: 'json' });
log.info('received body', res.body);
},
});
```

The `useInsecureHttpParser` option has been removed. It's permanently set to `true` in order to better mimic browsers' behavior.
Got Scraping automatically performs protocol negotiation, hence we removed the `useHttp2` option. It's set to `true` - 100% of browsers nowadays are capable of HTTP/2 requests, and more and more of the web is using it too!
In the `requestAsBrowser` approach, some of the options were named differently. Here's a list of the renamed options:
The `payload` option represented the body to send. It could be a string or a `Buffer`. However, there is no `payload` option anymore; you need to use `body` instead. Or, if you wish to send JSON, use `json`. Here's an example:
```ts
// Before:
await Apify.utils.requestAsBrowser({ …, payload: 'Hello, world!' });
await Apify.utils.requestAsBrowser({ …, payload: Buffer.from('c0ffe', 'hex') });
await Apify.utils.requestAsBrowser({ …, json: { hello: 'world' } });
// After:
await gotScraping({ …, body: 'Hello, world!' });
await gotScraping({ …, body: Buffer.from('c0ffe', 'hex') });
await gotScraping({ …, json: { hello: 'world' } });
```

The `ignoreSslErrors` option has been renamed to `https.rejectUnauthorized`. By default, it's set to `false` for convenience. However, if you want to make sure the connection is secure, you can do the following:
```ts
// Before:
await Apify.utils.requestAsBrowser({ …, ignoreSslErrors: false });
// After:
await gotScraping({ …, https: { rejectUnauthorized: true } });
```

Please note: the meanings are opposite! So we needed to invert the values as well.
`useMobileVersion`, `languageCode` and `countryCode` no longer exist. Instead, you need to use `headerGeneratorOptions` directly:
```ts
// Before:
await Apify.utils.requestAsBrowser({
…,
useMobileVersion: true,
languageCode: 'en',
countryCode: 'US',
});
// After:
await gotScraping({
…,
headerGeneratorOptions: {
devices: ['mobile'], // or ['desktop']
locales: ['en-US'],
},
});
```

The `timeoutSecs` option was replaced as well; in order to set a timeout, use `timeout.request` (which is in milliseconds now):
```ts
// Before:
await Apify.utils.requestAsBrowser({
…,
timeoutSecs: 30,
});
// After:
await gotScraping({
…,
timeout: {
request: 30 * 1000,
},
});
```

`throwOnHttpErrors` → `throwHttpErrors`. This option throws on unsuccessful HTTP status codes, for example 404. By default, it's set to `false`.
`decodeBody` → `decompress`. This option decompresses the body. Defaults to `true` - please do not change this or websites will break (unless you know what you're doing!).
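
A quick sketch of both renames together (the values shown are the defaults):

```ts
import { gotScraping } from 'got-scraping';

const response = await gotScraping({
    url: 'https://example.com',
    throwHttpErrors: false, // v2 requestAsBrowser: throwOnHttpErrors
    decompress: true,       // v2 requestAsBrowser: decodeBody
});
```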
The `abortFunction` option used to make the promise throw on specific responses if it returned `true`. However, it wasn't that useful.
You probably want to cancel the request instead, which you can do in the following way:
```ts
const promise = gotScraping(…);
promise.on('request', request => {
// Please note this is not a Got Request instance, but a ClientRequest one.
// https://nodejs.org/api/http.html#class-httpclientrequest
if (request.protocol !== 'https:') {
// Unsecure request, abort.
promise.cancel();
// If you set `isStream` to `true`, please use `stream.destroy()` instead.
}
});
const response = await promise;
```

Previously, you were able to have a browser pool that would mix Puppeteer and Playwright plugins (or even your own custom plugins if you've built any). As of this version, that is no longer allowed, and creating such a browser pool will cause an error to be thrown (it's expected that all plugins that will be used are of the same type).
One small feature worth mentioning is the ability to handle requests with browser crawlers outside the browser. To do that, we can use a combination of `Request.skipNavigation` and `context.sendRequest()`.
Take a look at how to achieve this by checking out the Skipping navigation for certain requests example!
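
The gist of it, as a hedged sketch (see the linked example for the full version):

```ts
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ request, sendRequest, log }) {
        if (request.skipNavigation) {
            // processed over plain HTTP via got-scraping, no browser page is opened
            const response = await sendRequest();
            log.info(`Fetched ${request.url} without the browser (${response.statusCode})`);
            return;
        }
        // regular browser-based handling goes here
    },
});

await crawler.addRequests([{ url: 'https://example.com', skipNavigation: true }]);
await crawler.run();
```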
Crawlee exports the default `log` instance directly as a named export. We also have a scoped `log` instance provided in the crawling context - this one will log messages prefixed with the crawler name and should be preferred for logging inside the request handler:
```ts
const crawler = new CheerioCrawler({
async requestHandler({ log, request }) {
log.info(`Opened ${request.loadedUrl}`);
},
});
```

Every crawler instance now has a `useState()` method that will return a state object we can use. It will be automatically saved when the `persistState` event occurs. The value is cached, so we can freely call this method multiple times and get the exact same reference. No need to worry about saving the value either, as it will happen automatically.
```ts
const crawler = new CheerioCrawler({
async requestHandler({ crawler }) {
const state = await crawler.useState({ foo: [] as number[] });
// just change the value, no need to care about saving it
state.foo.push(123);
},
});
```

The Apify platform helpers can now be found in the Apify SDK (the `apify` NPM package). It exports the `Actor` class that offers the following static helpers:
- `ApifyClient` shortcuts: `addWebhook()`, `call()`, `callTask()`, `metamorph()`
- helpers for running on Apify platform: `init()`, `exit()`, `fail()`, `main()`, `isAtHome()`, `createProxyConfiguration()`
- storage support: `getInput()`, `getValue()`, `openDataset()`, `openKeyValueStore()`, `openRequestQueue()`, `pushData()`, `setValue()`
- events support: `on()`, `off()`
- other utilities: `getEnv()`, `newClient()`, `reboot()`
`Actor.main` is now just syntax sugar around calling `Actor.init()` at the beginning and `Actor.exit()` at the end (plus wrapping the user function in a try/catch block). All those methods are async and should be awaited - with Node.js 16 we can use top-level await for that. In other words, the following two snippets are equivalent:
```ts
import { Actor } from 'apify';
await Actor.init();
// your code
await Actor.exit('Crawling finished!');
```

```ts
import { Actor } from 'apify';
await Actor.main(async () => {
// your code
}, { statusMessage: 'Crawling finished!' });
```

`Actor.init()` will conditionally set the storage implementation of Crawlee to the `ApifyClient` when running on the Apify platform, or keep the default (memory storage) implementation otherwise. It will also subscribe to the websocket events (or mimic them locally). `Actor.exit()` will handle the tear down and calls `process.exit()` to ensure our process won't hang indefinitely for some reason.
Apify SDK (v2) exports `Apify.events`, which is an `EventEmitter` instance. With Crawlee, the events are managed by the `EventManager` class instead. We can either access it via the `Actor.eventManager` getter, or use the `Actor.on` and `Actor.off` shortcuts instead:
```diff
-Apify.events.on(...);
+Actor.on(...);
```

> We can also get the `EventManager` instance via `Configuration.getEventManager()`.
In addition to the existing events, we now have an `exit` event fired when calling `Actor.exit()` (which is called at the end of `Actor.main()`). This event allows you to gracefully shut down any resources when `Actor.exit` is called.
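
For instance (a small sketch; the listener runs during the `Actor.exit()` call):

```ts
import { Actor } from 'apify';

Actor.on('exit', () => {
    // close database connections, flush buffers, etc.
    console.log('Shutting down, cleaning up resources...');
});

await Actor.init();
// your code
await Actor.exit();
```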
- `Apify.call()` is now just a shortcut for running `ApifyClient.actor(actorId).call(input, options)`, while also taking the token inside env vars into account
- `Apify.callTask()` is now just a shortcut for running `ApifyClient.task(taskId).call(input, options)`, while also taking the token inside env vars into account
- `Apify.metamorph()` is now just a shortcut for running `ApifyClient.task(taskId).metamorph(input, options)`, while also taking the ACTOR_RUN_ID inside env vars into account
- `Apify.waitForRunToFinish()` has been removed, use `ApifyClient.waitForFinish()` instead
- `Actor.main/init` purges the storage by default
- remove `purgeLocalStorage` helper, move purging to the storage class directly
- `StorageClient` interface now has an optional `purge` method
- purging happens automatically via `Actor.init()` (you can opt out via `purge: false` in the options of `init/main` methods)
- `QueueOperationInfo.request` is no longer available
- `Request.handledAt` is now string date in ISO format
- `Request.inProgress` and `Request.reclaimed` are now `Set`s instead of POJOs
- `injectUnderscore` from puppeteer utils has been removed
- `APIFY_MEMORY_MBYTES` is no longer taken into account, use `CRAWLEE_AVAILABLE_MEMORY_RATIO` instead
- some `AutoscaledPool` options are no longer available:
  - `cpuSnapshotIntervalSecs` and `memorySnapshotIntervalSecs` have been replaced with the top level `systemInfoIntervalMillis` configuration
  - `maxUsedCpuRatio` has been moved to the top level configuration
- `ProxyConfiguration.newUrlFunction` can be async. `.newUrl()` and `.newProxyInfo()` now return promises.
- `prepareRequestFunction` and `postResponseFunction` options are removed, use navigation hooks instead
- `gotoFunction` and `gotoTimeoutSecs` are removed
- removed compatibility fix for old/broken request queues with null `Request` props
- `fingerprintsOptions` renamed to `fingerprintOptions` (`fingerprints` -> `fingerprint`).
- `fingerprintOptions` now accept `useFingerprintCache` and `fingerprintCacheSize` (instead of `useFingerprintPerProxyCache` and `fingerprintPerProxyCacheSize`, which are now no longer available). This is because the cached fingerprints are no longer connected to proxy URLs but to sessions.

## 2.3.2 (2022-05-05)
- fix: use default user agent for playwright with chrome instead of the default "headless UA"
- fix: always hide webdriver of chrome browsers

## 2.3.1 (2022-05-03)

- fix: `utils.apifyClient` early instantiation (#1330)
- feat: `utils.playwright.injectJQuery()` (#1337)
- feat: add `keyValueStore` option to `Statistics` class (#1345)
- fix: ensure failed req count is correct when using `RequestList` (#1347)
- fix: random puppeteer crawler (running in headful mode) failure (#1348)
  This should help with the `We either navigate top level or have old version of the navigated frame` bug in puppeteer.
- fix: allow returning falsy values in `RequestTransform`'s return type

## 2.3.0 (2022-04-07)
- feat: accept more social media patterns (#1286)
- feat: add multiple click support to `enqueueLinksByClickingElements` (#1295)
- feat: instance-scoped "global" configuration (#1315)
- feat: requestList accepts proxyConfiguration for requestsFromUrls (#1317)
- feat: update `playwright` to v1.20.2
- feat: update `puppeteer` to v13.5.2
  We noticed that with this version of puppeteer, actor runs could crash with the `We either navigate top level or have old version of the navigated frame` error (puppeteer issue here). It should not happen while running the browser in headless mode. In case you need to run the browser in headful mode (`headless: false`), we recommend pinning the puppeteer version to `10.4.0` in the actor's `package.json` file.
- feat: stealth deprecation (#1314)
- feat: allow passing a stream to KeyValueStore.setRecord (#1325)
- fix: use correct apify-client instance for snapshotting (#1308)
- fix: automatically reset `RequestQueue` state after 5 minutes of inactivity, closes #997
- fix: improve guessing of chrome executable path on windows (#1294)
- fix: prune CPU snapshots locally (#1313)
- fix: improve browser launcher types (#1318)
This release should resolve the 0 concurrency bug by automatically resetting the
internal RequestQueue state after 5 minutes of inactivity.
We now track last activity done on a RequestQueue instance:
- added new request
- started processing a request (added to the `inProgress` cache)
- marked request as handled
- reclaimed request
If we don't detect one of those actions in the last 5 minutes, and we have some requests in the `inProgress` cache, we try to reset the state. We can override this limit via the `CRAWLEE_INTERNAL_TIMEOUT` env var.

This should finally resolve the 0 concurrency bug, as it was always about stuck requests in the `inProgress` cache.

## 2.2.2 (2022-02-14)

- fix: ensure `request.headers` is set
- fix: lower `RequestQueue` API timeout to 30 seconds
- improve logging for fetching next request and timeouts

## 2.2.1 (2022-01-03)
- fix: ignore requests that are no longer in progress (#1258)
- fix: do not use `tryCancel()` from inside sync callback (#1265)
- fix: revert to puppeteer 10.x (#1276)
- fix: wait when `body` is not available in `infiniteScroll()` from Puppeteer utils (#1238)
- fix: expose logger classes on the `utils.log` instance (#1278)

## 2.2.0 (2021-12-17)
Up until now, browser crawlers used the same session (and therefore the same proxy) for all requests from a single browser. Now sessions can get a new proxy each. This means that with incognito pages, each page will get a new proxy, aligning the behaviour with `CheerioCrawler`.

This feature is not enabled by default. To use it, we need to enable the `useIncognitoPages` flag under `launchContext`:
```ts
new Apify.PlaywrightCrawler({
launchContext: {
useIncognitoPages: true,
},
// ...
})
```

> Note that currently there is a performance overhead for using `useIncognitoPages`. Use this flag at your own will.

We are planning to enable this feature by default in SDK v3.0.
Previously when a page function timed out, the task still kept running. This could lead to requests being processed multiple times. In v2.2 we now have abortable timeouts that will cancel the task as early as possible.
Several new timeouts were added to the task function, which should help mitigate the zero concurrency bug. Namely, fetching of next request information and reclaiming failed requests back to the queue are now executed with a timeout, with 3 additional retries before the task fails. The timeout is always at least 300s (5 minutes), or `requestHandlerTimeoutSecs` if that value is higher.
- fix `RequestError: URI malformed` in cheerio crawler (#1205)
- only provide Cookie header if cookies are present (#1218)
- handle extra cases for `diffCookie` (#1217)
- add timeout for task function (#1234)
- implement proxy per page in browser crawlers (#1228)
- add fingerprinting support (#1243)
- implement abortable timeouts (#1245)
- add timeouts with retries to `runTaskFunction()` (#1250)
- automatically convert google spreadsheet URLs to CSV exports (#1255)

## 2.1.0 (2021-10-07)
- automatically convert google docs share urls to csv download ones in request list (#1174)
- use puppeteer emulating scrolls instead of `window.scrollBy` (#1170)
- warn if apify proxy is used in proxyUrls (#1173)
- fix `YOUTUBE_REGEX_STRING` being too greedy (#1171)
- add `purgeLocalStorage` utility method (#1187)
- catch errors inside request interceptors (#1188, #1190)
- add support for cgroups v2 (#1177)
- fix incorrect offset in `fixUrl` function (#1184)
- support channel and user links in YouTube regex (#1178)
- fix: allow passing `requestsFromUrl` to `RequestListOptions` in TS (#1191)
- allow passing `forceCloud` down to the KV store (#1186), closes #752
- merge cookies from session with user provided ones (#1201), closes #1197
- use `ApifyClient` v2 (full rewrite to TS)

## 2.0.7 (2021-09-08)

- Fix casting of int/bool environment variables (e.g. `APIFY_LOCAL_STORAGE_ENABLE_WAL_MODE`), closes #956
- Fix incognito pages and user data dir (#1145)
- Add `@ts-ignore` comments to imports of optional peer dependencies (#1152)
- Use config instance in `sdk.openSessionPool()` (#1154)
- Add a breaking callback to `infiniteScroll` (#1140)

## 2.0.6 (2021-08-27)

- Fix deprecation messages logged from `ProxyConfiguration` and `CheerioCrawler`.
- Update `got-scraping` to receive multiple improvements.

## 2.0.5 (2021-08-24)
- Fix error handling in puppeteer crawler

## 2.0.4 (2021-08-23)

- Use `sessionToken` with `got-scraping`

## 2.0.3 (2021-08-20)

- BREAKING IN EDGE CASES - We removed `forceUrlEncoding` in `requestAsBrowser` because we found out that recent versions of the underlying HTTP client `got` already encode URLs and `forceUrlEncoding` could lead to weird behavior. We think of this as fixing a bug, so we're not bumping the major version.
- Limit `handleRequestTimeoutMillis` to max valid value to prevent Node.js fallback to `1`.
- Use `got-scraping@^3.0.1`
- Disable SSL validation on MITM proxies
- Limit `handleRequestTimeoutMillis` to max valid value

## 2.0.2 (2021-08-12)

- Fix serialization issues in `CheerioCrawler` caused by parser conflicts in recent versions of `cheerio`.

## 2.0.1 (2021-08-06)

- Use `got-scraping` 2.0.1 until fully compatible.

## 2.0.0 (2021-08-05)

- BREAKING: Require Node.js >=15.10.0 because HTTP2 support on lower Node.js versions is very buggy.
- BREAKING: Bump `cheerio` to `1.0.0-rc.10` from `rc.3`. There were breaking changes in `cheerio` between the versions so this bump might be breaking for you as well.
- Remove `LiveViewServer` which was deprecated before release of SDK v1.