// ==UserScript== // @name Eza's Tumblr Scrape // @namespace https://inkbunny.net/ezalias // @description Creates a new page showing just the images from any Tumblr // @license MIT // @license Public domain / No rights reserved // @include http://*?ezastumblrscrape* // @include https://*?ezastumblrscrape* // @include http://*/ezastumblrscrape* // @include http://*.tumblr.com/ // @include https://*.tumblr.com/ // @include http://*.tumblr.com/page/* // @include https://*.tumblr.com/page/* // @include http://*.tumblr.com/tagged/* // @include https://*.tumblr.com/tagged/* // @include http://*.tumblr.com/archive // @include http://*.co.vu/* // @exclude *imageshack.us* // @exclude *imageshack.com* // @grant GM_registerMenuCommand // @version 5.5 // @downloadURL none // ==/UserScript== // Create an imaginary page on the relevant Tumblr domain, mostly to avoid the ridiculous same-origin policy for public HTML pages. Populate page with all images from that Tumblr. Add links to this page on normal pages within the blog. // This script also works on off-site Tumblrs, by the way - just add /archive?ezastumblrscrape?scrapewholesite after the ".com" or whatever. Sorry it's not more concise. // Make it work, make it fast, make it pretty - in that order. // TODO: // I'll have to add filtering as some kind of text input... and could potentially do multi-tag filtering, if I can reliably identify posts and/or reliably match tag definitions to images and image sets. // This is a good feature for doing /scrapewholesite to get text links and then paging through them with fancy dynamic presentation nonsense. Also: duplicate elision. // I'd love to do some multi-scrape stuff, e.g. scraping both /tagged/homestuck and /tagged/art, but that requires some communication between divs to avoid constant repetition. // post-level detection would also be great because it'd let me filter out reblogs. fuck all these people with 1000-page tumblrs, shitty animated gifs in their theme, infinite scrolling, and NO FUCKING TAGS. looking at you, http://neuroticnick.tumblr.com/post/16618331343/oh-gamzee#dnr - you prick. // Look into Tumblr Saviour to see how they handle and filter out text posts. // Add a convenient interface for changing options? "Change browsing options" to unhide a div that lists every ?key=value pair, with text-entry boxes or radio buttons as appropriate, and a button that pushes a new URL into the address bar and re-hides the div. Would need to be separate from thumbnail toggle so long as anything false is suppressed in get_url or whatever. // Dropdown menus? Thumbnails yes/no, Pages At Once 1-20. These change the options_map settings immediately, so next/prev links will use them. Link to Apply Changes uses same ?startpage as current. // Could I generalize that the way I've generalized Image Glutton? E.g., grab all links from a Pixiv gallery page, show all images and all manga pages. // Possibly @include any ?scrapeeverythingdammit to grab all links and embed all pictures found on them. single-jump recursive web mirroring. (fucking same-domain policy!) // now that I've got key-value mapping, add a link for 'view original posts only (experimental).' er, 'hide reblogs?' difficult to accurately convey. // make it an element of the post-scraping function. then it would also work on scrape-whole-tumblr. // better yet: call it separately, then use the post-scraping function on each post-level chunk of HTML. i.e. call scrape_without_reblogs from scrape_whole_tumblr, split off each post into strings, and call soft_scrape_page( single_post_string ) to get all the same images. // or would it be better to get all images from any post? doing this by-post means we aren't getting theme nonsense (mostly). // maybe just exclude images where a link to another tumblr happens before the next image... no, text posts could screw that up. // general post detection is about recognizing patterns. can we automate it heuristically? bear in mind it'd be done at least once per scrape-page, and possibly once per tumblr-page. // user b84485 seems to be using the scrape-whole-site option to open image links in tabs, and so is annoyed by the 500/1280 duplicates. maybe a 'remove duplicates' button after the whole site's done? // It's a legitimately good idea. Lord knows I prefer opening images in tabs under most circumstances. // Basically I want a "Browse Links" page instead of just "grab everything that isn't nailed down." // http://mekacrap.tumblr.com/post/82151443664/oh-my-looks-like-theres-some-pussy-under#dnr - lots of 'read more' stuff, for when that's implemented. // eza's tumblr scrape: "read more" might be tumblr standard. // e.g.

Read More

// http://c-enpai.tumblr.com/ - interesting content visible in /archive, but every page is 'themed' to be a blank front page. wtf. // chokes on multi-thousand-page tumblrs like actual-vriska, at least when listing all pages. it's just link-heavy text. maybe skip having a div for every page and just append to one div. or skip divs and append to the raw document innerHTML. it could be a memory thing, if ajax elements are never destroyed. // multi-thousand-page tumblrs make "find image links from all pages" choke. massive memory use, massive CPU load. ridiculous. it's just text. (alright, it's links and ajax requests, but it's doggedly linear.) // maybe skip individual divs and append the raw pile-of-links hypertext into one div. or skip divs entirely and append it straight to the document innerHTML. // could it be a memory leak thing? are ajax elements getting properly released and destroyed when their scope ends? kind of ridiculous either way, considering we're holding just a few kilobytes of text per page. // try re-using the same ajax object. /* Assorted notes from another text file . eza's tumblr fixiv? de-style everything by simply erasing the "; document.body.innerHTML += css_block; // Has to go in all at once or the browser "helpfully" closes the style tag upon evaluation var mydiv = document.getElementById( "maindiv" ); // I apologize for the generic names. This script used to be a lot simpler. // Identify options in URL (in the form of ?key=value pairs) var key_value_array = window.location.href.split( '?' ); // Knowing how to do it the hard way is less impressive than knowing how not to do it the hard way. key_value_array.shift(); // The first element will be the site URL. Durrrr. for( dollarsign of key_value_array ) { // forEach( key_value_array ), including clumsy homage to $_ var this_pair = dollarsign.split( '=' ); // Split key=value into [key,value] (or sometimes just [key]) if( this_pair.length < 2 ) { this_pair.push( true ); } // If there's no value for this key, make its value boolean True if( this_pair[1] == "false " ) { this_pair[1] = false; } // If the value is the string "false" then make it False - note fun with 1-ordinal "length" and 0-ordinal array[element]. else if( !isNaN( parseInt( this_pair[1] ) ) ) { this_pair[1] = parseInt( this_pair[1] ); } // If the value string looks like a number, make it a number options_map[ this_pair[0] ] = this_pair[1]; // options_map.key = value } if( options_map.find[ options_map.find.length - 1 ] == "/" ) { options_map.find = options_map.find.substring( 0, options_map.find.length - 1 ); } // kludge - prevents example.tumblr.com//page/2 nonsense. (does this matter anymore?) if( options_map.thumbnails ) { document.body.className = "fixed-width"; } // CSS approach to thumbnail sizing; className="" to toggle back to auto. // Oh yeah, we have to do this -after- options_map.find is defined: site_and_tags = window.location.protocol + "//" + window.location.hostname + options_map.find; // e.g. http: + // + example.tumblr.com + /tagged/sherlock // Add tags to title, for archival and identification purposes document.title += options_map.find.split('/').join(' '); // E.g. /tagged/example/chrono -> "tagged example chrono" // In Chrome, /archive pages monkey-patch and overwrite Promise.all and Promise.resolve. // Clunky solution to clunky problem: grab the default property from a fresh iframe. // Big thanks to inu-no-policeman for the iframe-based solution. Prototypes were not helpful. var iframe = document.createElement( 'iframe' ); document.body.appendChild( iframe ); window['Promise'] = iframe.contentWindow['Promise']; document.body.removeChild( iframe ); // Go to image browser or link scraper according to URL options. mydiv.innerHTML = "Not all images are guaranteed to appear.
"; // Thanks to JS's wacky accomodating nature, mydiv is global despite appearing in an if-else block. if( options_map[ "scrapewholesite" ] ) { scrape_whole_tumblr(); // Images from every page, presented as text links } else if( options_map[ "everypost" ] ) { new_embedded_display(); // Images from every post on every page, from ten pages at once } else { scrape_tumblr_pages(); // Ten pages of embedded images at a time } } else { // If it's just a normal Tumblr page, add a link to the appropriate /ezastumblrscrape URL // Add link(s) to the standard "+Follow / Dashboard" nonsense. Before +Follow, I think - to avoid messing with users' muscle memory. // This is currently beyond my ability to dick with JS through a script in a plugin. Let's kludge it for immediate usability. // kludge by Ivan - http://userscripts-mirror.org/scripts/review/65725.html var url = window.location.protocol + "//" + window.location.hostname + "/archive?ezastumblrscrape?scrapewholesite?find=" + window.location.pathname; // Preserve /tagged/tag/chrono, etc. Also preserve http: vs https: via "location.protocol". if( url.indexOf( "/page/chrono" ) < 0 ) { // Basically checking for posts /tagged/page, thanks to Detective-Pony. Don't even ask. if( url.lastIndexOf( "/page/" ) > 0 ) { url = url.substring( 0, url.lastIndexOf( "/page/" ) ); } // Don't include e.g. /page/2. We'll add that ourselves. } // "Don't clean this up. It's not permanent." // Fuck it, it works and it's fragile. Just boost its z-index so it stops getting covered. var scrape_button = document.createElement("a"); scrape_button.setAttribute( "style", "position: absolute; top: 26px; right: 2px; padding: 2px 0 0; width: 50px; height: 18px; display: block; overflow: hidden; -moz-border-radius: 3px; background: #777; color: #fff; font-size: 8pt; text-decoration: none; font-weight: bold; text-align: center; line-height: 12pt; z-index: 100; " ); scrape_button.setAttribute("href", url); scrape_button.innerHTML = "Scrape"; var body_ref = document.getElementsByTagName("body")[0]; body_ref.appendChild(scrape_button); // Pages where the button gets split (i.e. clicking top half only redirects tiny corner iframe) are probably loading this script separately in the iframe. // Which means you'd need to redirect the window instead of just linking. Bluh. // Greasemonkey supports user commands through its add-on menu! Thus: no more manually typing /archive?ezastumblrscrape?scrapewholesite on uncooperative blogs. GM_registerMenuCommand( "Scrape whole Tumblr blog", go_to_scrapewholesite ); } function go_to_scrapewholesite() { let redirect = window.location.protocol + "//" + window.location.hostname + "/archive?ezastumblrscrape?scrapewholesite?find=" + window.location.pathname; window.location.href = redirect; } // ------------------------------------ Whole-site scraper for use with DownThemAll ------------------------------------ // // Monolithic scrape-whole-site function, recreating the original intent (before I added pages and made it a glorified multipage image browser) function scrape_whole_tumblr() { var highest_known_page = 0; // Link to image-viewing version, preserving current tags mydiv.innerHTML += "

Browse images (10 pages at once)

"; mydiv.innerHTML += "Browse images (1 page at once)
"; mydiv.innerHTML += "(Experimental fetch-every-post image browser)

"; // Find out how many pages we need to scrape. if( isNaN( options_map.lastpage ) ) { // Find upper bound in a small number of fetches. Ideally we'd skip this - some themes list e.g. "Page 1 of 24." I think that requires back-end cooperation. mydiv.innerHTML += "Finding out how many pages are in " + site_and_tags.substring( site_and_tags.indexOf( '/' ) + 2 ) + ":

"; // Returns page number if there's no Next link, or negative page number if there is a Next link. // Only for use on /mobile pages; relies on Tumblr's shitty standard theme function test_next_page( body ) { var link_index = body.indexOf( 'rel="canonical"' ); // var page_index = body.indexOf( '/page/', link_index ); var terminator_index = body.indexOf( '"', page_index ); var this_page = parseInt( body.substring( page_index+6, terminator_index ) ); if( body.indexOf( '>next<' ) > 0 ) { return -this_page; } else { return this_page } } // Generates an array of length "steps" between given boundaries - or near enough, for sanity's sake function array_between_bounds( lower_bound, upper_bound, steps ) { if( lower_bound > upper_bound ) { // Swap if out-of-order. var temp = lower_bound; lower_bound = upper_bound, upper_bound = temp; } var bound_range = upper_bound - lower_bound; if( steps > bound_range ) { steps = bound_range; } // Steps <= bound_range, but steps > 1 to avoid division by zero: var pages_per_test = parseInt( bound_range / steps ); // Steps-1 here, so first element is lower_bound & last is upper_bound. Off-by-one errors, whee... var range = Array( steps ) .fill( lower_bound ) .map( (value,index) => value += index * pages_per_test ); range.push( upper_bound ); return range; } // Given a (presumably sorted) list of page numbers, find the last that exists and the first that doesn't exist. function find_reasonable_bound( test_array ) { return Promise.all( test_array.map( pagenum => fetch( site_and_tags + '/page/' + pagenum + '/mobile' ) ) ) .then( responses => Promise.all( responses.map( response => response.text() ) ) ) .then( pages => pages.map( page => test_next_page( page ) ) ) .then( numbers => { var lower_index = -1; numbers.forEach( (value,index) => { if( value < 0 ) { lower_index++; } } ); // Count the negative numbers (i.e., count the pages with known content) if( lower_index < 0 ) { lower_index = 0; } var bounds = [ Math.abs(numbers[lower_index]), numbers[lower_index+1] ] mydiv.innerHTML += "Last page is between " + bounds[0] + " and " + bounds[1] + ".
"; return bounds; } ) } // Repeatedly narrow down how many pages we're talking about; find a reasonable "last" page find_reasonable_bound( [2, 10, 100, 1000, 10000, 100000] ) // Are we talking a couple pages, or a shitload of pages? .then( pair => find_reasonable_bound( array_between_bounds( pair[0], pair[1], 10 ) ) ) // Narrow it down. Fewer rounds of more fetches works best. .then( pair => find_reasonable_bound( array_between_bounds( pair[0], pair[1], 10 ) ) ) // Time is round count, fetches add up, selectivity is fetches x fetches. // Quit fine-tuning numbers and just conditional in some more testing for wide ranges. .then( pair => { if( pair[1] - pair[0] > 50 ) { return find_reasonable_bound( array_between_bounds( pair[0], pair[1], 10 ) ) } else { return pair; } } ) .then( pair => { if( pair[1] - pair[0] > 50 ) { return find_reasonable_bound( array_between_bounds( pair[0], pair[1], 10 ) ) } else { return pair; } } ) .then( pair => { options_map.lastpage = pair[1]; start_scraping_button(); } ); } else { // If we're given the highest page by the URL, just use that start_scraping_button(); } // Add "Scrape" button to the page. This will grab images and links from many pages and list them page-by-page. function start_scraping_button() { document.getElementById( 'browse' ).href += "?lastpage=" + options_map.lastpage; // Add last-page indicator to Browse Images link document.getElementById( 'browse1' ).href += "?lastpage=" + options_map.lastpage; // ... and the page-at-once link. document.getElementById( 'browse2' ).href += "?lastpage=" + options_map.lastpage; // ... and the fetch-every-post link. if( options_map.grabrange ) { // If we're only grabbing a 1000-page block from a huge-ass tumblr: mydiv.innerHTML += "
This will grab 1000 pages starting at " + options_map.grabrange + ".

"; } else { // If we really are describing the last page: mydiv.innerHTML += "
Last page is " + options_map.lastpage + " or lower.

"; } if( options_map.lastpage > 1500 && !options_map.grabrange ) { // If we need to link to 1000-page blocks, and aren't currently inside one: for( var x = 1; x < options_map.lastpage; x += 1000 ) { // For every 1000 pages... var decade_url = window.location.href + "?grabrange=" + x + "?lastpage=" + options_map.lastpage; mydiv.innerHTML += "Pages " + x + "-" + (x+999) + "
"; // ... link a range of 1000 pages. } } // Add button to scrape every page, one after another. // Buttons within GreaseMonkey are a huge pain in the ass. I stole this from stackoverflow.com/questions/6480082/ - thanks, Brock Adams. var button = document.createElement ('div'); button.innerHTML = ''; button.setAttribute ( 'id', 'scrape_button' ); // I'm really not sure why this id and the above HTML id aren't the same property. document.body.appendChild ( button ); // Add button (at the end is fine) document.getElementById ("myButton").addEventListener ( "click", scrape_all_pages, false ); // Activate button - when clicked, it triggers scrape_all_pages() if( options_map.autostart ) { document.getElementById ("myButton").click(); } // Getting tired of clicking on every reload - debug-ish } } function scrape_all_pages() { // Example code implies that this function /can/ take a parameter via the event listener, but I'm not sure how. var button = document.getElementById( "scrape_button" ); // First, remove the button. There's no reason it should be clickable twice. button.parentNode.removeChild( button ); // The DOM can only remove elements from a higher level. "Elements can't commit suicide, but infanticide is permitted." mydiv.innerHTML += "Scraping page:

"; // This makes it easier to view progress, // Create divs for all pages' content, allowing asynchronous AJAX fetches var x = 1; var div_end_page = options_map.lastpage; if( !isNaN( options_map.grabrange ) ) { // If grabbing 1000 pages from the middle of 10,000, don't create 0..10,000 divs x = options_map.grabrange; div_end_page = x + 1000; // Should be +999, but whatever, no harm in tiny overshoot } for( ; x <= div_end_page; x++ ) { var siteurl = site_and_tags + "/page/" + x; if( options_map.usemobile ) { siteurl += "/mobile"; } // If ?usemobile is flagged, scrape the mobile version. if( x == 1 && options_map.usemobile ) { siteurl = site_and_tags + "/mobile"; } // Hacky fix for redirect from example.tumblr.com/page/1/anything -> example.tumblr.com var new_div = document.createElement( 'div' ); new_div.id = '' + x; document.body.appendChild( new_div ); } // Fetch all pages with content on them var page_counter_div = document.getElementById( 'pagecounter' ); // Probably minor, but over thousands of laggy page updates, I'll take any optimization. pagecounter.innerHTML = "" + 1; var begin_page = 1; var end_page = options_map.lastpage; if( !isNaN( options_map.grabrange ) ) { // If a range is defined, grab only 1000 pages starting there begin_page = options_map.grabrange; end_page = options_map.grabrange + 999; // NOT plus 1000. Stop making that mistake. First page + 999 = 1000 total. if( end_page > options_map.lastpage ) { end_page = options_map.lastpage; } // Kludge document.title += " " + (parseInt( begin_page / 1000 ) + 1); // Change page title to indicate which block of pages we're saving } // Generate array of URL/pagenum pair-arrays url_index_array = new Array; for( var x = begin_page; x <= end_page; x++ ) { var siteurl = site_and_tags + "/page/" + x; if( options_map.usemobile ) { siteurl += "/mobile"; } // If ?usemobile is flagged, scrape the mobile version. No theme shenanigans... but also no photosets. Sigh. if( x == 1 && options_map.usemobile ) { siteurl = site_and_tags + "/mobile"; } // Hacky fix for redirect from example.tumblr.com/page/1/anything -> example.tumblr.com url_index_array.push( [siteurl, x] ); } // Fetch, scrape, and display all URLs. Uses promises to work in parallel and promise.all to limit speed and memory (mostly for reliability's sake). // Consider privileging first page with single-element fetch, to increase apparent responsiveness. Doherty threshold for frustration is 400ms. var simultaneous_fetches = 25; var chain = Promise.resolve(0); // Empty promise so we can use "then" var order_array = [1]; // We want to show the first page immediately, and this is a callback rat's-nest, so let's make an array of how many pages to take each round for( var x = 1; x < url_index_array.length; x += simultaneous_fetches ) { // E.g. [1, simultaneous_fetchs, s_f, s_f, s_f, whatever's left] if( url_index_array.length - x > simultaneous_fetches ) { order_array.push( simultaneous_fetches ); } else { order_array.push( url_index_array.length - x ); } } order_array.forEach( (how_many) => { chain = chain.then( s => { var subarray = url_index_array.splice( 0, how_many ); // Shift a reasonable number of elements into separate array, for partial array.map return Promise.all( subarray.map( page => Promise.all( [ fetch( page[0] ).then( s => s.text() ), page[1], page[0] ] ) // Return [ body of page, page number, page URL ] ) ) } ) .then( responses => responses.map( s => { // Scrape URLs for links and images, display on page var pagenum = s[1]; var page_url = s[2]; var url_array = soft_scrape_page_promise( s[0] ) // Surprise, this is a promise now .then( urls => { // Sort #link URLs to appear first, because we don't do that in soft-scrape anymore urls.sort( (a,b) => -a.indexOf( "#link" ) ); // Strings containing "#link" go before others - return +1 if not found in 'a.' Should be stable. // Print URLs so DownThemAll (or similar) can grab them var bulk_string = "
" + page_url + "
"; // A digest, so we can update innerHTML just once per div // DEBUG-ish - on theory that 1000-page-tall scraping/rendering fucks my VRAM if( options_map.smalltext ) { bulk_string = "

" + bulk_string; } // If ?smalltext flag is set, render text unusably small, for esoteric reasons urls.forEach( (value,index,array) => { if( options_map.plaintext ) { bulk_string += value + '
'; } else { bulk_string += '' + value + '
'; } } ) document.getElementById( '' + pagenum ).innerHTML = bulk_string; if( parseInt( pagecounter.innerHTML ) < pagenum ) { pagecounter.innerHTML = "" + pagenum; } // Increment pagecounter (where sensible) } ); } ) ) } ) chain = chain.then( s => { document.getElementById( 'afterpagecounter' ).innerHTML = "Done. Use DownThemAll (or a similar plugin) to grab all these links."; // DEBUG: divulge contents of page_dupe_hash to check for common tags // Ugh, I'm going to have to turn this from an associative array into an array-of-arrays if I want to sort it. let tag_overview = "
" + "Tag overview: " + "
"; let score_tag_list = new Array; // This will hold an array of arrays so we can sort this associative array by its values. Wheee. for( let url in page_dupe_hash ) { if( url.indexOf( '/tagged/' ) > 0 && page_dupe_hash[ url ] > 1 ) { // If it's a tag URL and NON-unique... score_tag_list.push( [ page_dupe_hash[ url ], url ] ); // ... store [ number of times seen, tag URL ] for sorting. } } score_tag_list.sort( (a,b) => a[0] < b[0] ); // Descending order - most common tags first score_tag_list.map( pair => { tag_overview += "
" + pair[0] + '\t' + "" + pair[1] + ""; } ) document.body.innerHTML += tag_overview; } ) } // ------------------------------------ Multi-page scraper with embedded images ------------------------------------ // function scrape_tumblr_pages() { // Grab an empty page so that duplicate-removal hides whatever junk is on every single page // This is DEBUG-ish. It might be slow, barring caching. It might not work due to asynchrony. It could block actual content thanks to 'my best posts' sidebars. exclude_content_example( site_and_tags + '/page/100000' ); if( isNaN( parseInt( options_map.startpage ) ) || options_map.startpage <= 1 ) { options_map.startpage = 1; } // Sanity check var next_link = options_url( "startpage", options_map.startpage + options_map.pagesatonce ); var prev_link = options_url( "startpage", options_map.startpage - options_map.pagesatonce ); var prev_next_controls = "
"; if( options_map.startpage > 1 ) { prev_next_controls += "<<< Previous - "; } prev_next_controls += "Next >>>

"; mydiv.innerHTML += prev_next_controls; document.getElementById("bottom_controls_div").innerHTML += prev_next_controls; // Link to the thumbnail page or full-size-image page as appropriate if( options_map.thumbnails ) { mydiv.innerHTML += "Switch to full-size images"; } else { mydiv.innerHTML += "Switch to thumbnails"; } // Toggle thumbnails via CSS (eventually, alter options_map accordingly) mydiv.innerHTML += " - Toggle image size"; if( options_map.pagesatonce == 1 ) { mydiv.innerHTML += " - Show ten pages at once"; } else { mydiv.innerHTML += " - Show one page at once"; } mydiv.innerHTML += " - Scrape whole Tumblr"; mydiv.innerHTML += " - (Experimental fetch-every-post image browser)
"; // Fill an array with the page URLs to be scraped (and create per-page divs while we're at it) var pages = new Array( parseInt( options_map.pagesatonce ) ) .fill( parseInt( options_map.startpage ) ) .map( (value,index) => value+index ); pages.forEach( pagenum => { mydiv.innerHTML += "


Page " + pagenum + "
"; } ) pages.map( pagenum => { var siteurl = site_and_tags + "/page/" + pagenum; // example.tumblr.com/page/startpage, startpage+1, startpage+2, etc. if( options_map.usemobile ) { siteurl += "/mobile"; } // If ?usemobile is flagged, scrape mobile version. No theme shenanigans... but also no photosets. Sigh. if( pagenum == 1 && options_map.usemobile ) { siteurl = site_and_tags + "/mobile"; } // Hacky fix for redirect from example.tumblr.com/page/1/anything -> example.tumblr.com fetch( siteurl ).then( response => response.text() ).then( text => { document.getElementById( pagenum ).innerHTML += "fetched
" // Immediately indicate the fetch happened. + "" + siteurl + "
"; // Link to page. Useful for viewing things in-situ... and debugging. // For some asinine reason, 'return url_array' causes 'Permission denied to access property "then".' So fake it with ugly nesting. soft_scrape_page_promise( text ) .then( url_array => { var div_digest = ""; // Instead of updating each div's HTML for every image, we'll lump it into one string and update the page once per div. var video_array = new Array; var outlink_array = new Array; var inlink_array = new Array; url_array.forEach( (value,index,array) => { // Shift videos and links to separate arrays, blank out those URLs in url_array if( value.indexOf( '#video' ) > 0 ) { video_array.push( value ); array[index] = '' } if( value.indexOf( '#offsite' ) > 0 ) { outlink_array.push( value ); array[index] = '' } if( value.indexOf( '#local' ) > 0 ) { inlink_array.push( value ); array[index] = '' } } ); url_array = url_array.filter( url => url === "" ? false : true ); // Remove empty elements from url_array // Display video links, if there are any video_array.forEach( value => {div_digest += "Video: " + value + "
"; } ) // Display page links if the ?showlinks flag is enabled if( options_map.showlinks ) { div_digest += "Outgoing links: "; outlink_array.forEach( (value,index) => { div_digest += "O" + (index+1) + " " } ); div_digest += "
" + "Same-Tumblr links: "; inlink_array.forEach( (value,index) => { div_digest += "T" + (index+1) + " " } ); div_digest += "
"; } // Embed high-res images to be seen, clicked, and saved url_array.forEach( image_url => { // This clunky function looks for a lower-res image if the high-res version doesn't exist. // Surprisingly, this does still matter. E.g. http://66.media.tumblr.com/ba99a55896a14a2e083cec076f159956/tumblr_inline_nyuc77wUR01ryfvr9_500.gif // This might mismatch _100 images and _250 links because of that self-erasing clause... but it's super rare, so meh. var on_error = 'if(this.src.indexOf("_1280")>0){this.src=this.src.replace("_1280","_500");}'; // Swap 1280 for 500 on_error += 'else if(this.src.indexOf("_500")>0){this.src=this.src.replace("_500","_400");}'; // Or swap 500 for 400 on_error += 'else if(this.src.indexOf("_400")>0){this.src=this.src.replace("_400","_250");}'; // Or swap 400 for 250 on_error += 'else{this.src=this.src.replace("_250","_100");this.onerror=null;}'; // Or swap 250 for 100, then give up on_error += 'document.getElementById("' + image_url + '").href=this.src;'; // Link the image to itself, regardless of size // Embed images (linked to themselves) and link to photosets if( image_url.indexOf( "#photoset#" ) > 0 ) { // Before the first image in a photoset, print the photoset link. var photoset_url = image_url.split( "#" ).pop(); // URL is like tumblr.com/image#photoset#http://tumblr.com/photoset_iframe - separate past last hash... t. div_digest += " Set:"; } div_digest += "" + "(Waiting for image) "; } ) div_digest += "
(End of " + siteurl + ")"; // Another link to the page, because I'm tired of scrolling back up. document.getElementById( pagenum ).innerHTML += div_digest; } ) // End of 'then( url_array => { } )' } ) // End of 'then( text => { } )' } ) // End of 'pages.map( pagenum => { } )' } // ------------------------------------ Post-by-post scraper with embedded images ------------------------------------ // // Scrape each page for /post/ links, scrape each /post/ for content, display in-order with less callback hell // New layout & new scrape method - not required to be compatible with previous functions function new_embedded_display() { // Grab an empty page so that duplicate-removal hides whatever junk is on every single page // This is DEBUG-ish. It might be slow, barring caching. It might not work due to asynchrony. It could block actual content thanks to 'my best posts' sidebars. exclude_content_example( site_and_tags + '/page/100000' ); if( isNaN( parseInt( options_map.startpage ) ) || options_map.startpage <= 1 ) { options_map.startpage = 1; } // "<<< Previous - Next >>>" var next_link = options_url( "startpage", options_map.startpage + options_map.pagesatonce ); var prev_link = options_url( "startpage", options_map.startpage - options_map.pagesatonce ); var prev_next_controls = "
"; if( options_map.startpage > 1 ) { prev_next_controls += "<<< Previous - "; } prev_next_controls += "Next >>>

"; mydiv.innerHTML += prev_next_controls; document.getElementById("bottom_controls_div").innerHTML += prev_next_controls; // Links out from this mode - scrapewholesite, original mode, maybe other crap mydiv.innerHTML += "This mode is under development and subject to change."; mydiv.innerHTML += " - Return to original image browser" + "
" + "
"; // "Pages 1 to 10 (of 100) from http://example.tumblr.com" mydiv.innerHTML += "Pages " + options_map.startpage + " to " + (options_map.startpage + options_map.pagesatonce - 1); if( !isNaN(options_map.lastpage) ) { mydiv.innerHTML += " (of " + options_map.lastpage + ")"; } mydiv.innerHTML += " from " + site_and_tags + "
"; // Image size options via CSS mydiv.innerHTML += "Original image sizes - "; mydiv.innerHTML += "Snap columns - "; mydiv.innerHTML += "Snap rows - "; mydiv.innerHTML += "Fit width - "; mydiv.innerHTML += "Fit height - "; mydiv.innerHTML += "Fit both

"; // Messy inline function for toggling page breaks - they're optional because we have post permalinks now mydiv.innerHTML += "Toggle page breaks

"; mydiv.innerHTML += ""; // Empty span for things to be placed after. posts_placed.push( 0 ); // Because fuck special cases. // Scrape some pages for( let x = options_map.startpage; x < options_map.startpage + options_map.pagesatonce; x++ ) { fetch( site_and_tags + "/page/" + x ).then( r => r.text() ).then( text => { scrape_by_posts( text, x ); } ) } } // Take the HTML from a /page, fetch the /post links, display images // Probably ought to be despaghettified and combined with the above function, but I was fighting callback hell -hard- after the last major version // Alternately, split it even further and do some .then( do_this ).then( do_that ) kinda stuff above. function scrape_by_posts( html_copy, page_number ) { let posts = links_from_page( html_copy ); // Get links on page posts = posts.filter( link => { return link.indexOf( '/post/' ) > 0 && link.indexOf( '/photoset' ) < 0; } ); // Keep /post links but not photoset iframes posts = posts.map( link => { return link.replace( '#notes', '' ); } ); // post/1234 is the same as /post/1234#notes posts = posts.filter( link => link.indexOf( window.location.host ) > 0 ); // Same-origin filter. Not necessary, but it unclutters the console. Fuckin' CORS. posts = remove_duplicates( posts ); // De-dupe // 'posts' now contains an array of /post URLs // Display link and linebreak before first post on this page let first_id = posts.map( u => parseInt( u.split( '/' )[4] ) ).sort( ).pop(); // Grab ID from its place in each URL, sort accordingly, take the top one let page_link = "
Page " + page_number + ""; if( posts.length == 0 ) { first_id = 1; page_link += " - No images found."; } // Handle empty pages with dummy content. Out of order, but whatever. page_link += "

"; display_post( page_link, first_id + 0.5 ); // +/- on the ID will change with /chrono, once that matters posts.map( link => { fetch( link ).then( r => r.text() ).then( text => { let sublinks = links_from_page( text ); sublinks = sublinks.filter( s => { return s.indexOf( '.jpg' ) > 0 || s.indexOf( '.jpeg' ) > 0 || s.indexOf( '.png' ) > 0 || s.indexOf( '.gif' ) > 0; } ); sublinks = sublinks.filter( tumblr_blacklist_filter ); // Remove avatars and crap sublinks = sublinks.map( image_standardizer ); // Clean up semi-dupes (e.g. same image in different sizes -> same URL) sublinks = sublinks.filter( novelty_filter ); // Global duplicate remover // Oh. Photosets sort of just... work? That might not be reliable; DownThemAll acts like it can't see the iframes on some themes. // Yep, they're there. Gonna be hard to notice if/when they fail. Oh well, "not all images are guaranteed to appear." // Videos will still be weird. (But it does grab their preview thumbnails.) // Get ID from post URL, e.g. http//example.tumblr.com/post/12345/title => 12345 let post_id = parseInt( link.split( '/' )[4] ); // 12345 as a NUMBER, not a string, doofus if( sublinks.length > 0 ) { // If this post has images we're displaying - let this_post = new String; sublinks.map( url => { this_post += ''; this_post += ''; this_post += ''; this_post += 'Permalink '; } ) display_post( this_post, post_id ); } } ) } ) } // Place content on page in descending order according to post ID number // Consider rejiggering the old scrape method to use this. Move to 'universal' section if so. Alter or spin off to link posts instead? // Turns out I never implemented ?chrono or ?reverse, so nevermind that for now. function display_post( content, post_id ) { let this_node = document.createElement( "span" ); this_node.innerHTML = content; this_node.id = post_id // Find lower-numbered node than post_id let target_id = posts_placed.filter( n => n <= post_id ).sort( ).pop(); // Take the highest number less than (or equal to) post_id let target_node = document.getElementById( target_id ); // http://stackoverflow.com/questions/4793604/how-to-do-insert-after-in-javascript-without-using-a-library target_node.parentNode.insertBefore( this_node, target_node ); // Insert our span after the lower-ID node posts_placed.push( post_id ); // Remember that we added this ID // No return value } // Return ascending or descending order depending on "chrono" setting // function post_order_sort( a, b ) // ------------------------------------ Universal page-scraping function (and other helper functions) ------------------------------------ // // Add URLs from a 'blank' page to page_dupe_hash (without just calling soft_scrape_page_promise and ignoring its results) function exclude_content_example( url ) { fetch( url ).then( r => r.text() ).then( text => { let links = links_from_page( text ); links = links.filter( image_standardizer ) links = links.filter( novelty_filter ); } ) // No return value } // Spaghetti to reduce redundancy: given a page's text, return a list of URLs. function links_from_page( html_copy ) { // Cut off the page at the "More you might like" / "Related posts" footer, on themes that have one html_copy = html_copy.split( '="related-posts' ).shift(); let http_array = html_copy.split( /['="']http/ ); // Regex split on anything that looks like a source or href declaration http_array.shift(); // Ditch first element, which is just etc. http_array = http_array.map( s => { // Theoretically parallel .map instead of maybe-linear .forEach or low-level for() loop s = s.split( /['<>"']/ )[0]; // Terminate each element (split on any terminator, take first subelement) s = s.replace( /\\/g, '' ); // Remove escaping backslashes (e.g. http\/\/ -> http//) if( s.indexOf( "%3A%2F%2F" ) > -1 ) { s = decodeURIComponent( s ); } // What is with all the http%3A%2F%2F URLs? return "http" + s; // Oh yeah, add http back in (regex eats it) } ) // http_array now contains an array of strings that should be URLs return http_array; } // Filter: Return false for typical Tumblr nonsense (JS, avatars, RSS, etc.) function tumblr_blacklist_filter( url ) { if( url.indexOf( "/reblog/" ) > 0 || url.indexOf( "/tagged/" ) > 0 || // Might get removed so the script can track and report tag use. Stupid art tags like 'my-draws' or 'art-poop' are a pain to find. url.indexOf( ".tumblr.com/avatar_" ) > 0 || url.indexOf( ".tumblr.com/image/" ) > 0 || url.indexOf( ".tumblr.com/rss" ) > 0 || url.indexOf( "srvcs.tumblr.com" ) > 0 || url.indexOf( "assets.tumblr.com" ) > 0 || url.indexOf( "schema.org" ) > 0 || url.indexOf( ".js" ) > 0 || url.indexOf( ".css" ) > 0 || url.indexOf( "twitter.com/intent" ) > 0 || // Weirdly common now url.indexOf( "ezastumblrscrape" ) > 0 ) // Somehow this script is running on pages being fetched, inserting a link. Okay. Sure. { return false } else { return true } } // Return standard canonical URL for various resizes of Tumblr images - size of _1280, single CDN function image_standardizer( url ) { // Some lower-size images are automatically resized. We'll change the URL to the maximum size just in case, and Tumblr will provide the highest resolution. // Replace all resizes with _1280 versions. Nearly all _1280 URLs resolve to highest-resolution versions now, so we don't need to e.g. handle GIFs separately. url = url.replace( "_540.", "_1280." ).replace( "_500.", "_1280." ).replace( "_400.", "_1280." ).replace( "_250.", "_1280." ).replace( "_100.", "_1280." ); // Standardize the CDN subdomain, to prevent duplicates. All images work from all CDNs, right? if( url.indexOf( ".media.tumblr.com" ) > 0 ) { // For the sake of duplicate removal, whack all 65.tumblr.com, 66, 67, etc., to a single number (presumably a CDN) var url_parts = url.split( '.' ); url_parts[0] = 'https://66'; // http vs. https doesn't actually matter, right? God forbid you fetch() across origins, but you can embed whatever from wherever. url = url_parts.join( '.' ); } // Some URLs have no CDN number, which really screws with duplicate-removal. So does HTTP vs. HTTPS. if( url.indexOf( "//media." ) > 0 ) { url = url.replace( "//media", "//66.media" ).replace( "http", "https" ); } return url; } // Remove duplicates from an array (from an iterable?) - returns de-duped array // Credit to http://stackoverflow.com/questions/9229645/remove-duplicates-from-javascript-array for hash-based string method function remove_duplicates( list ) { let seen = {}; list = list.filter( function( item ) { return seen.hasOwnProperty( item ) ? false : ( seen[ item ] = true ); } ); return list; } // Filter: Return true ONCE for any given string. // Global duplicate remover - return false for items found in page_dupe_hash, otherwise add new items to it and return true // Now also counts instances of each non-unique argument function novelty_filter( url ) { // return page_dupe_hash.hasOwnProperty( url ) ? false : ( page_dupe_hash[ url ] = true ); if( page_dupe_hash.hasOwnProperty( url ) ) { page_dupe_hash[ url ] += 1; return false; } else { page_dupe_hash[ url ] = 1; return true; } } // Given the bare HTML of a Tumblr page, return an array of Promises for image/video/link URLs function soft_scrape_page_promise( html_copy ) { // Linear portion: let http_array = links_from_page( html_copy ); // Split bare HTML into link and image sources http_array.filter( url => url.indexOf( '/tagged/' ) > 0 ).filter( novelty_filter ); // Track tags for statistics, before the blacklist removes them http_array = http_array.filter( tumblr_blacklist_filter ); // Blacklist filter for URLs - typical garbage function is_an_image( url ) { // Whitelist URLs with image file extensions or Tumblr iframe indicators var image_link = false; if( url.indexOf( ".gif" ) > 0 ) { image_link = true; } if( url.indexOf( ".jpg" ) > 0 ) { image_link = true; } if( url.indexOf( ".jpeg" ) > 0 ) { image_link = true; } if( url.indexOf( ".png" ) > 0 ) { image_link = true; } if( url.indexOf( "/photoset_iframe" ) > 0 ) { image_link = true; } if( url.indexOf( ".tumblr.com/video/" ) > 0 ) { image_link = true; } return image_link; } // Separate the images http_array = http_array.map( url => { if( is_an_image( url ) ) { // If it's an image, get rid of any Tumblr variability about resolution or CDNs, to avoid duplicates with nonmatching URLs return image_standardizer( url ); } else { // Else if not an image if( url.indexOf( window.location.host ) > 0 ) { url += "#local" } else { url += "#offsite" } // Mark in-domain vs. out-of-domain URLs. if( options_map.imagesonly ) { return ""; } // ?imagesonly to skip links on ?scrapewholesite return url + "#link"; } } ) .filter( n => { // Remove all empty strings, where "empty" can involve a lot of #gratuitous #tags. if( n.split("#")[0] === "" ) { return false } else { return true } } ); http_array = remove_duplicates( http_array ); // Remove duplicates within the list http_array = http_array.filter( novelty_filter ); // Remove duplicates throughout the page // Should this be skipped on scrapewholesite? Might be slowing things down. // Async portion: // Return promise that resolves to list of URLs, including fetched videos and photoset sub-images return Promise.all( http_array.map( s => { if( s.indexOf( '/photoset_iframe' ) > 0 ) { // If this URL is a photoset, return a promise for an array of URLs return fetch( s ).then( r => r.text() ).then( text => { // Fetch URL, get body text from response var photos = text.split( 'href="' ); // Isolate photoset elements from href= declarations photos.shift(); // Get rid of first element because it's everything before the first "href" photos = photos.map( p => p.split( '"' )[0] + "#photoset" ); // Tag all photoset images as such, just because photos[0] += "#" + s; // Tag first image in set with photoset URL so browse mode can link to it return photos; } ) } else if ( s.indexOf( '.tumblr.com/video/' ) > 0 ) { // Else if this URL is an embedded video, return a Tumblr-standard URL for the bare video file var subdomain = s.split( '/' ); // E.g. https://www.tumblr.com/video/examplename/123456/500/ -> https,,www.tumblr.com,video,examplename,123456,500 var video_post = window.location.protocol + "//" + subdomain[4] + ".tumblr.com/post/" + subdomain[5] + "/"; // e.g. http://examplename.tumblr.com/post/123456/ - note window.location.protocol vs. subdomain[0], maintaining http/https locally return fetch( video_post ).then( r => r.text() ).then( text => { if( text.indexOf( 'og:image' ) > 0 ) { // property="og:image" content="http://67.media.tumblr.com/tumblr_123456_frame1.jpg" --> tumblr_123456_frame1.jpg var video_name = text.split( 'og:image' )[1].split( 'media.tumblr.com' )[1].split( '"' )[0].split( '/' ).pop(); } else if( text.indexOf( 'poster=' ) > 0 ) { // poster='https://31.media.tumblr.com/tumblr_nuzyxqeJNh1rjoppl_frame1.jpg' var video_name = text.split( "poster='" )[1].split( 'media.tumblr.com' )[1].split( "'" )[0].split( '/' ).pop(); // Bandaid solution. Tumblr just sucks. } else { return video_post + '#video'; // Current methods miss the whole page if these splits miss, so fuck it, just return -something.- } // tumblr_abcdef12345_frame1.jpg -> tumblr_abcdef12345.mp4 video_name = "tumblr_" + video_name.split( '_' )[1] + ".mp4#video"; video_name = "https://vt.tumblr.com/" + video_name; // Standard Tumblr-wide video server return video_name; // Should be e.g. https://vt.tumblr.com/tumblr_abcdef12345.mp4 } ) } return Promise.resolve( [s] ); // Else if this URL is singular, return a single element... resolved as a promise for Promise.all, in an array for Array.concat. Whee. } ) ) .then( nested_array => { // Given the Promise.all'd array of resolved URLs and URL-arrays return [].concat.apply( [], nested_array ); // Concatenate array of arrays - apply turns array into comma-separated list, concat turns CSL of arrays into a single array } ) } // Returns a URL with all the options_map options in ?key=value format - optionally allowing changes to options in the returned URL // Valid uses: // options_url() -> all current settings, no changes // options_url( "name", number ) -> ?name=number // options_url( "name", true ) -> ?name // options_url( {name:number} ) -> ?name=number // options_url( {name:number, other:true} ) -> ?name=number?other // Note that simply passing "name" will remove ?name, not add it, because the value will evaluate false. I should probably change this? Eh, { key } without :value causes errors. function options_url( key, value ) { var copy_map = new Object(); for( var i in options_map ) { copy_map[ i ] = options_map[ i ]; } // In any sensible language, this would read "copy_map = object_map." Javascript genuinely does not know how to copy objects. Fuck's sake. if( typeof key === 'string' ) { // the parameters are optional. just calling options_url() will return e.g. example.tumblr.com/archive?ezastumblrscrape?startpage=1 if( !value ) { value = false; } // if there's no value then use false copy_map[ key ] = value; // change this key, so we can e.g. link to example.tumblr.com/archive?ezastumblrscrape?startpage=2 } else if( typeof key === 'object' ) { // If we're passed a hashmap for( var i in key ) { if( ! key[ i ] ) { key[ i ] = false; } // Turn any false evaluation into an explicit boolean - this might not be necessary copy_map[ i ] = key[ i ]; // Press key-object values onto copy_map-object values } } // Construct URL from options var base_site = window.location.href.substring( 0, window.location.href.indexOf( "?" ) ); // should include /archive, but if not, it still works on most pages for( var k in copy_map ) { // JS maps are weird. We're actually setting attributes of a generic object. So map[ "thumbnails" ] is the same as map.thumbnails. if( copy_map[ k ] ) { // Unless the value is False, print a ?key=value pair. base_site += "?" + k; if( copy_map[ k ] !== true ) { base_site += "=" + copy_map[ k ]; } // If the value is boolean True, just print the value as a flag. } } return base_site; }