**** BEGIN LOGGING AT Fri May 14 02:59:56 2021
May 14 09:33:00 is there any reason why wic would have a different owner for a copied folder? e.g. wic line "part --source rootfs --rootfs-dir=${IMAGE_ROOTFS}/data" has owner root in rootfs but another owner in the partition
May 14 10:08:57 just to show what I mean, https://gist.github.com/alex88/542446aad9d756098f0193815c965376 the /data/openvpn folder is created because /data is the only writable dir
May 14 10:11:55 hm, i'm sure I saw a function in oe-core (maybe in devtool?) to turn a layer name into a base path.
May 14 10:16:00 rburton, is that related to my question?
May 14 10:16:07 No
May 14 10:16:57 oh ok :)
May 14 10:32:06 hey there, complete noob to yocto here. I've built an os for a 3d printer we're going to sell using ansible, and i'm migrating to mender/yocto; i've just read some of the docs for the past few hours. I'm building for rpi4, what would be the fastest way to build?
May 14 10:32:38 i was going to set up a pi4 with an ssd at home for the builds but i'm hesitating between a vps and amazon, though i don't like to use aws
May 14 10:33:55 any advice?
May 14 10:34:43 will the build actually be faster if it's built on the target architecture, or will it be faster on a more powerful machine of the same architecture?
May 14 10:35:54 my workstation is x86_64 but i'm building for 32-bit, so armhfp as per uname
May 14 10:42:29 peac, I think using your x86_64 cpu will be faster, I've run some tests on a 96-core aws instance and it was indeed much faster than my workstation
May 14 10:42:56 the rpi cpu just isn't fast enough to compete with your workstation cpu even when cross compiling
May 14 10:43:48 alex88: thanks for your feedback! i've tested emulating arm on x86 and it was truly awful, but i did not know about cross compilation
May 14 10:44:34 did you try qemuarm?
it wasn't *that* bad in my case but my OS is extremely simple atm so maybe that's the reason I didn't notice a huge difference
May 14 10:45:33 it was in virt-manager using raspios so i guess yes, under the hood
May 14 10:45:48 it was unusable in my case with the gui
May 14 10:46:42 haven't tested the gui at all, but I see why it might have been struggling
May 14 10:46:47 while running arm VMs on raspi works perfectly fine, i can run 2, 3 VMs or raspios on raspios without usability impact
May 14 10:47:24 s/or/of/
May 14 10:49:20 alex88: what was your build time on aws?
May 14 10:49:34 btw usually only the first build should take a long time, after that things are quite smooth (unless you want to install npm global packages, those are painfully slow for some reason)
May 14 10:49:40 peac, from scratch about 15 mins
May 14 10:49:50 but remember my os is minimal
May 14 10:49:55 ok so this is really cpu intensive
May 14 10:50:24 yeah, except the npm package which spends 99% of the time in do_configure and do_package, everything else is mostly compilation time
May 14 10:51:08 would using a raspi cluster help ?
May 14 10:51:33 I haven't looked into distributed building (if it's even possible)
May 14 10:51:41 I know there's something called toaster to do remote builds
May 14 10:52:18 but I'm not sure it splits the work between nodes
May 14 10:56:14 i used to do it with gcc so i guess it must be configurable
May 14 11:33:26 toaster is just a web frontend to bitbake
May 14 11:34:39 peac: yocto is always cross, so the x86-64 workstation will be a *lot* faster than a rpi
May 14 11:36:42 there is zero difference in build time when the only change is the target architecture: the build time difference between x86 and arm on the same hardware is statistically insignificant
May 14 11:50:00 rburton: thank you for confirming that !
May 14 13:42:43 I'm officially doing too many things at once. Two errors in my workflow last night alone.
May 14 13:42:45 gah.
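[Editor's note] The wic ownership question at the top of the log (a folder owned by root under ${IMAGE_ROOTFS} showing up with a different owner in the built partition) is usually diagnosed by comparing ownership on both sides. A minimal sketch, assuming hypothetical paths; the helper name and the mount points in the comment are illustrative, not from the log:

```python
import os
import tempfile

def owner(path):
    """Return (uid, gid) for a path, e.g. to compare a directory under
    ${IMAGE_ROOTFS}/data with the same directory in the mounted partition."""
    st = os.stat(path)
    return (st.st_uid, st.st_gid)

# Stand-in file so the sketch is self-contained; the real comparison would be
# e.g. owner("<rootfs>/data/openvpn") vs owner("/mnt/data-part/openvpn").
with tempfile.NamedTemporaryFile() as f:
    print(owner(f.name))  # a freshly created file is owned by the current user
```

If the two tuples differ, the ownership change happened during image assembly rather than in the rootfs itself.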
May 14 14:11:44 zeddii: happens to the best of us..
May 14 14:51:23 Hello, all. Is there any visual tool for browsing through what recipes would be "included" by some target?
May 14 14:51:24 I've started working on a small parser, but I am pretty sure that I am doing something that is probably already done
May 14 14:54:59 zeddii: will send an RFC for that dev86 soon, but wasn't able to reproduce it here locally, not sure what would be the difference on your host
May 14 14:58:16 sounds good. And me either!
May 14 15:02:36 zeddii: can you please test https://github.com/shr-project/meta-virtualization/commit/8dbd22d1efb2752ccd4b3901b0167a6f9c49397f ?
May 14 15:02:50 or do you prefer it as an RFC patch on the ML?
May 14 15:03:02 dang, wrong link, sec
May 14 15:03:30 I can grab it from somewhere like that, so that's fine.
May 14 15:04:15 https://github.com/shr-project/meta-virtualization/commit/8808ec369a01eba91345ea0b8bc7fb1b9b888b1c
May 14 15:06:10 testing now
May 14 15:07:34 ha. cpp blew up. I'll pastebin the result (I know these suck when you can't just trigger the error locally).
May 14 15:08:36 JaMa: https://pastebin.com/NzZrB3xN .. it was on install this time. I'll look as well.
May 14 15:10:23 this rings a bell, this is related to what gperf-native generates; I had a fix locally before noticing that newer repo which already had support for gperf-3, but why it fails for you now is a bit of a mystery
May 14 15:13:06 zeddii: as in https://github.com/jbruchon/dev86/pull/19/commits/f3e0666133156d8da29fdef57d23db2ab491be09 let me touch some .toks to see if I can reproduce it with current gperf-native
May 14 15:13:52 ahah. yah. definitely weird. I'm normally on the other side of this, trying to figure out why something has broken on a builder i can't access :D
May 14 15:14:22 I just did a fully clean build, and it happened again. so it wasn't some half-baked, reused object that got me.
May 14 15:14:50 if I knew what can of worms dev86 is, then I would pretend that I haven't seen that "gperf not found" line in the bitbake world log :)
May 14 15:15:22 easy to miss one line in a 300+ MB text file..
May 14 15:15:35 that i can relate to as well. I appreciate the update, because that older, rotting thing would have sat there for who knows how long.
May 14 15:19:40 isn't this a bit strange? 2021-05-14 12:04:59 (10.9 MB/s) - '/mnt/mirror-write/jansa/downloads/go1.16.4.src.tar.gz' saved [20917203/20917203]
May 14 15:19:43 ERROR: Fetcher failure for URL: 'https://golang.org/dl/go1.16.4.src.tar.gz;name=main'. Unable to fetch URL from any source.
May 14 15:20:01 why would it first write the archive and then fail with Unable to fetch?
May 14 15:21:36 the file does exist on this NFS mount, it's readable, and the sha256sum matches go-1.16.4.inc:SRC_URI[main.sha256sum] = "ae4f6b6e2a1677d31817984655a762074b5356da50fb58722b99104870d43503"
May 14 15:27:11 zeddii: hmm https://github.com/jbruchon/dev86/pull/19/commits/f3e0666133156d8da29fdef57d23db2ab491be09 was merged to master and then it disappeared from master again, damn those force pushes I guess.. will send a fix shortly
May 14 15:28:39 * zeddii whistles innocently
May 14 15:46:12 please add https://github.com/shr-project/meta-virtualization/commit/a7dbdc3d144073b693092be49d47a55250607d9c + https://github.com/shr-project/meta-virtualization/commit/a7dbdc3d144073b693092be49d47a55250607d9c if it works for you with these, and if you agree, then we can squash these last 2 to save a few bytes.. :)
May 14 15:46:53 and again a wrong link, give me a few mins to drink some coffee before I paste something else
May 14 15:50:51 no worries.
May 14 16:00:14 coffee (/), build native (/), qemux86 (/), qemux86-64 (/), top 3 commits in https://github.com/shr-project/meta-virtualization/commits/jansa/master should be worth testing
May 14 16:05:00 ack'd, trying now.
May 14 16:07:03 JaMa: that fixed it up on the broken builder.
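[Editor's note] The manual check JaMa describes above — confirming that the file on the NFS mirror matches the recipe's SRC_URI[main.sha256sum] — can be sketched in Python. The digest below is computed over a stand-in byte string so the example is self-contained; for the real check the input would be the bytes of the downloaded go1.16.4.src.tar.gz:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex sha256 digest, the value compared against SRC_URI[...sha256sum]."""
    return hashlib.sha256(data).hexdigest()

# Stand-in payload; in practice: sha256_hex(open(tarball_path, "rb").read())
digest = sha256_hex(b"")
print(digest)
# sha256 of the empty byte string is a well-known constant:
assert digest == "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
```

In JaMa's case the digest matched and the fetch still failed, so the checksum itself was not the culprit there.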
May 14 16:07:50 I can just push them, if you weren't going to tweak the commit messages, or can just wait for a resend. But the issue is gone
May 14 16:08:09 feel free to push them
May 14 16:11:04 done. and many thanks.
May 14 16:11:29 thanks, let's see if I get a green build now :)
May 14 16:11:54 hah. I was trying not to think that ;) fix mine, break yours. That's usually the way.
May 14 16:17:08 anyone other than RP familiar with runqueue / unreachable sstate for a quick sanity check?
May 14 16:36:41 JaMa: I am kind of here atm... :)
May 14 16:40:18 RP: I didn't want to disrupt you, but as you're here, can you please look at the end of the commit message in https://git.openembedded.org/openembedded-core-contrib/commit/?h=jansa/master&id=6f93dce1289d3b70edea2ff7947eca98187f6993 and see if it makes any sense to you?
May 14 16:41:22 when I'm trying to reproduce it manually, the "unreachable" removal does the right thing and removes at-spi2-core-native/2.40.0-r0 _before_ building the new 2.40.1
May 14 16:44:29 it would be a bit safer if the manifest filename contained the version like the stamp file does; then at least it could detect that there is a manifest from a different version as well, so it's not safe to clean up anymore (but I think we would still need to prevent this situation from occurring anyway)
May 14 16:46:06 and I've seen this behavior also with dunfell today, so whatever is causing this behavior has been there for a while
May 14 16:49:03 JaMa: Are there two different machines here?
May 14 16:50:41 yes
May 14 16:52:31 JaMa: The "reachable" code maintains a list of stamps per machine. I think you're right that one is breaking the other as the manifest file names don't include a version
May 14 16:53:32 JaMa: when you tried to reproduce, did you switch machines and see if that other machine then broke things?
May 14 16:54:11 JaMa: I can see a definite bug there
May 14 16:56:40 I did, but will try again (because I switched between the at-spi2-core upgrade and building adwaita-icon-theme); maybe I should have switched multiple times while building at-spi2-core-native (to build the older with M1, then the newer with M2, then switch back to M1)
May 14 17:03:44 RP: https://paste.ubuntu.com/p/qNWHnxFPXX/ reproduces this issue
May 14 17:04:44 JaMa: what is odd here is that looking at that code, it looks through the index-x86_64 and finds a 2.40.0 entry. That entry should be gone as soon as 2.40.1 builds :/
May 14 17:07:02 or here with the output https://paste.ubuntu.com/p/Jg7zc7PgcR/
May 14 17:10:35 I've added a bit more debug output between the runs, let's see
May 14 17:10:37 JaMa: I tried that and it worked for me, not quite sure what I'm not doing :/
May 14 17:11:11 JaMa: oh, no rm_work
May 14 17:12:08 and I forgot to reset PR between runs, so it didn't work for me either :)
May 14 17:12:57 JaMa: still not reproducing here :/
May 14 17:14:29 strange, I've got the latest (as of today) bitbake master + oe-core only, so it should be close to the poky I assume you were using; let me test it a bit more now that I have a relatively quick and simple reproducer
May 14 17:15:09 JaMa: you're on master?
May 14 17:15:16 yes
May 14 17:17:12 JaMa: oh, I see. The reproducer doesn't error, it just shows the manifest is gone
May 14 17:17:28 right, sorry, should have mentioned it
May 14 17:18:22 JaMa: it's fine, I was just looking for the manifest error :)
May 14 17:18:55 JaMa: presumably if you build something that uses zlib-native it would now error
May 14 17:19:19 that's what the adwaita build was for before, but I've dropped it as the missing manifest and zlib-native files are bad enough already
May 14 17:20:31 JaMa: it is definitely a bug :) Not quite sure what is going wrong but having a reproducer helps a lot!
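[Editor's note] The bug being reproduced here boils down to a stale entry for an old recipe version being left behind in the per-machine sstate index. A toy Python illustration of the cleanup idea under discussion (replace any existing index entry for the same recipe when a new manifest is recorded) — this is not the sstate.bbclass code; the function and entry names are illustrative:

```python
# Toy model of a per-machine sstate index: a flat list of manifest names,
# informally keyed by recipe name. The real index lives in
# sstate-control/index-<arch>; everything here is illustrative only.
def record_manifest(index, recipe, manifest):
    """Record a new manifest, dropping any stale entry for the same recipe."""
    kept = [m for m in index if not m.startswith(recipe + "-")]
    return kept + [recipe + "-" + manifest]

# Without this cleanup, rebuilding at-spi2-core-native 2.40.1 leaves the old
# 2.40.0 entry behind, and a later "unreachable" sweep can then act on the
# wrong files.
index = ["at-spi2-core-native-2.40.0-r0.populate_sysroot"]
index = record_manifest(index, "at-spi2-core-native", "2.40.1-r0.populate_sysroot")
print(index)  # only the 2.40.1 entry remains
```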
May 14 17:23:07 JaMa: if you stop before the third bitbake invocation, you can see two zlib-native entries in sstate-control/index-x86_64
May 14 17:23:26 JaMa: that is the problem and it all goes wrong from there
May 14 17:23:39 JaMa: one is the right version, one isn't
May 14 17:25:11 RP: agreed, but to prevent the 2nd one from being created, the "unreachable" cleanup would need to check all stamps of the MACHINEs which were built in the TMPDIR, right?
May 14 17:25:30 so that the 2nd build would remove the r0 build before building r1
May 14 17:26:00 JaMa: when the new entry is added to the file, it needs to clean up the old one
May 14 17:26:13 updated reproducer https://paste.ubuntu.com/p/8jC5SqJZHz/ and output https://paste.ubuntu.com/p/n6SDsg4Bsy/
May 14 17:26:17 JaMa: the code in sstate_install that adds this is clearly not right :/
May 14 17:26:30 OK, I see
May 14 17:29:35 JaMa: as far as I can tell there is no code that removes an entry from that file
May 14 17:29:43 which seems strange
May 14 17:32:16 but it should be removed from the index by sstate_eventhandler_reachablestamps already, right? not by sstate_install
May 14 17:33:49 JaMa: right, that is the remove code I was thinking of
May 14 17:38:38 but it still feels wrong that the "shared" native stamps are referenced from machine-specific index-machine- files; once the native recipe is rebuilt, only one MACHINE will reference the correct stamp, while all the others will be incorrect
May 14 17:39:11 but I believe this had good reasons to be designed this way; just looking at it from this bug's perspective it seems strange
May 14 17:39:45 JaMa: I have a fix in mind, just testing but changing the file on the wrong machine :/
May 14 17:40:47 great :)
May 14 17:41:33 will be on a call for the next hour(s), will test after that
May 14 17:45:38 my first yocto build does not boot, how do I troubleshoot?
May 14 17:46:43 i've manually done what i was going to automate with this docker file https://dpaste.com/9VL7VBMRR May 14 17:47:52 i used dd to copy the image: `sudo dd if=~/documents/mender-raspberrypi/build/tmp/deploy/images/raspberrypi4/core-image-base-raspberrypi4.sdimg of=/dev/sdb bs=1M && sudo sync` May 14 17:55:20 do i need to clone poky by itself before cloning the raspberry community project? May 14 17:59:05 i've been following this https://hub.mender.io/t/raspberry-pi-4-model-b/889 May 14 18:07:32 JaMa: http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/t222&id=c7e434b86644d88c5943307ac836f00c47d12c02 is my potential fix for it May 14 18:07:57 JaMa: I'm out of time now, I'll try and write up some proper code comments in due course. It does fix the issue locally for me May 14 18:31:52 RP: yes, I cannot reproduce the issue anymore with your fix as well, will include it in many builds over the weekend dunfell-honister and let you know if it fails somewhere May 14 19:14:18 JaMa: sounds good, thanks. I'll get the commit cleaned up and sent out in due course May 14 19:36:10 need set priorities for process, can someone shed some light on https://stackoverflow.com/questions/67539387/linux-scheduler-priority-of-1-does-not-exist-why **** ENDING LOGGING AT Sat May 15 02:59:58 2021