**** BEGIN LOGGING AT Sun Sep 08 02:59:57 2019
Sep 08 06:11:54 trapt: are you compiling on device or pc?
Sep 08 06:12:43 because sdk packages might differ from stock/power packages
Sep 08 08:11:22 I read it as on device
Sep 08 08:49:24 sixwheeledbeast: wget question: you may be able to use wildcards/globbing if the protocol you are using to transfer files supports them, but HTTP does not support globbing; maybe FTP does. What are you trying to download? Can you just download everything, then delete the unwanted stuff from your local copy?
Sep 08 08:53:01 brolin_empey: I was getting 403 from the site using wget -r. In the end, instead of working it out, I generated a bash script with all 120 pdf files and ran it.
Sep 08 09:02:27 swb: wget only accepts limited wildcards
Sep 08 09:02:40 but you can use -r and include/exclude patterns, I guess
Sep 08 09:03:24 sixwheeledbeast: some web sites deny requests if the User-Agent of the client is reported as wget. By default wget truthfully identifies itself as wget, but usually the workaround is simply:
Sep 08 09:03:25 wget -U 'Mozilla/5.0'
Sep 08 09:03:25 which fools the web site into thinking the request is from a normal, interactive web browser.
Sep 08 09:04:09 I was using a Firefox user agent too
Sep 08 09:07:05 it was fine to wget each page even without -U, but not -r from any page higher
Sep 08 09:08:33 sixwheeledbeast: sometimes the web site denies the requests because it thinks they are automated, since wget by default sends requests as fast as it can. wget has command-line options for this case that specify a minimum delay between requests and allow randomising, or at least varying, the delay to make the requests look like they come from an interactive user. I had to use that more advanced workaround years ago but seem to have not needed it in recent years, maybe because I have mostly no longer been using wget -r.
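[Editor's note] The workarounds discussed above (a browser User-Agent, a randomised inter-request delay, and generating a script of 120 wget commands instead of fighting -r) can be sketched as follows. The base URL and the file1.pdf..file120.pdf naming scheme are hypothetical placeholders, not the actual site from the log; -U, --wait, and --random-wait are real wget options.

```shell
#!/bin/sh
# Hypothetical base URL and file naming; substitute the real site's layout.
base='https://example.org/pdfs'

# Emit one polite wget command per file instead of hand-writing 120 lines:
# -U sends a browser-like User-Agent, --wait=1 pauses between requests,
# --random-wait varies that pause so the traffic looks interactive.
for i in $(seq 1 120); do
  printf "wget -U 'Mozilla/5.0' --wait=1 --random-wait '%s/file%d.pdf'\n" \
    "$base" "$i"
done > fetch.sh

wc -l < fetch.sh
```

Running the generated fetch.sh then downloads the files one by one, at a throttled pace.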
Sep 08 09:12:06 Oh, wget also has a partially undocumented option to ignore the robots.txt file. Does the web site from which you are trying to download stuff with wget have a /robots.txt file?
Sep 08 09:15:19 brolin_empey: yep, I only saw wordpress lines in there tho
Sep 08 09:18:38 sixwheeledbeast: OK.
Sep 08 09:48:07 you can use -np (no parent) to make it not ascend to parent directories
Sep 08 14:30:20 KotCzarny: I'm compiling on device
Sep 08 14:33:10 watch the free space then
Sep 08 14:33:53 I believe I should have plenty of free space; the device was just reset before I got it, and all I've added is small utils
Sep 08 14:34:24 I personally made myself sdk chroots that can be run on any ARM device
Sep 08 14:34:57 makes life a bit easier, because 1x400MHz isn't much compared to a 10usd 4x1.2GHz Orange Pi
Sep 08 14:34:59 :)
Sep 08 14:35:17 but I only released the one for the N900
Sep 08 14:35:36 still, if you know Linux and a bit of Debian, you can create one yourself
Sep 08 14:35:50 (better than cluttering rootfs)
Sep 08 14:40:13 (or ... just use scratchbox :)
Sep 08 14:45:26 I haven't been able to find a working download of scratchbox, if anyone has a link
Sep 08 14:47:00 well, last time I followed the wiki instructions to install it on a fresh Debian, but ...
Sep 08 14:47:10 (wheezy iirc)
Sep 08 16:09:56 ~scratchbox
Sep 08 16:09:58 from memory, scratchbox is a cross-compiling system that uses binfmt_misc, rpc calls, and an nfs mount to make a cross-build appear to be 100% native, and is found at http://www.scratchbox.org/, hosted by maemo now. Also at http://maemo.merlin1991.at/files/SB
Sep 08 16:10:56 thank you, not sure what I was looking at last night that I didn't find this
Sep 08 16:11:20 and install instructions are at wiki.maemo.org
Sep 08 16:37:14 ~sb
Sep 08 16:37:15 [scratchbox] a cross-compiling system that uses binfmt_misc, rpc calls, and an nfs mount to make a cross-build appear to be 100% native, and is found at http://www.scratchbox.org/, hosted by maemo now.
Also at http://maemo.merlin1991.at/files/SB
Sep 08 16:37:41 hmm same link
**** ENDING LOGGING AT Mon Sep 09 02:59:58 2019
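[Editor's note] The recursive-download options mentioned in the log, the partially undocumented robots.txt override and -np (no parent), can be combined with the earlier politeness flags into a single invocation. The sketch below only prints the command rather than running it, since the target URL (example.org) and the pdf accept-list (-A pdf) are assumptions for illustration; -e robots=off, -np, and -A are real wget options.

```shell
#!/bin/sh
# Dry-run sketch: build the recursive wget command discussed in the log.
#   -r              recurse into linked pages
#   -np             never ascend to the parent directory
#   -e robots=off   ignore the site's /robots.txt (the "partially
#                   undocumented" override mentioned above)
#   -A pdf          accept-list: keep only .pdf files (hypothetical filter)
cmd="wget -r -np -e robots=off -A pdf -U 'Mozilla/5.0' --wait=1 --random-wait https://example.org/docs/"

# Print the command instead of executing it, so nothing hits the network.
echo "$cmd"
```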