**** BEGIN LOGGING AT Tue Jan 18 02:59:56 2022
Jan 18 10:09:51 marex, I thought it was None, yeah
Jan 18 10:09:59 marex, we could insert guards in AUH instead
Jan 18 13:19:26 kanavin: do you mind if I leave that to you, since I'm not that deep in python ?
Jan 18 14:33:23 oh look, ffmpeg 5 was released and it is supposed to be some LTS release too, maybe it would make sense to put it into oe-core for the upcoming lts release too
Jan 18 17:50:54 khem: did python3-pyruvate fail on qemuppc as well? (I expect it would for the same reason)
Jan 18 18:08:51 khem: maybe qemuppc has a 64-bit pointer?
Jan 18 18:09:31 https://doc.rust-lang.org/std/sync/atomic/#portability
Jan 18 18:42:54 RP: this seems to be hanging (same debian9-ty-2 issue), should I try to look into it, and where to start? https://autobuilder.yoctoproject.org/typhoon/#/builders/80/builds/3036
Jan 18 18:55:11 khem: never mind, I saw your email. Same fix as librsvg
Jan 18 21:34:43 kanavin: I sent you the auh fix attempt
Jan 18 21:35:07 marex, does it work for you?
Jan 18 21:35:22 the patch looks fine
Jan 18 21:35:35 kanavin: CI is not complaining all right
Jan 18 21:36:35 marex, I merged it
Jan 18 21:37:39 kanavin: you have too much trust in my ability to copy from stackoverflow and paste elsewhere, oh well ...
Jan 18 21:38:27 marex, we'll find out on sunday ;)
Jan 18 21:42:43 kanavin: all right, lemme re-run the CI with HEAD
Jan 18 23:25:20 kanavin: I guess we're assuming it is the hardware and letting michael look tomorrow?
Jan 18 23:35:42 RP: Btw, I mentioned earlier that I had troubles due to setscene tasks failing, and it was due to sstate tarballs being empty files. I have now figured out why it is happening: it is in fact due to the upgrade from Honister 3.4 to 3.4.1, specifically commit 10e300e6.
Jan 18 23:36:56 That commit may seem benign, removing the -w test and instead always touching the file and ignoring any errors. I know I was involved in the review of it, and agreed with the change.
Jan 18 23:38:03 However, when you combine it with the fact that we have a job that prunes the global sstate cache of old tarballs, things will go wrong.
Jan 18 23:40:57 What happens is that the local sstate cache has a symbolic link to the global (NFS-mounted) sstate cache. Then the tarball is removed from the global sstate cache, so the local link points into the void. Now a build is run and needs that sstate tarball, finds the link, and since -w is no longer tested on it, it just touches the link, which creates an empty "tarball" in the global sstate cache. And after that the next build will fail because bitbake cannot unpack the empty tarball. :(
Jan 18 23:41:18 Saur[m]: ah, the symlink was the piece I was missing
Jan 18 23:41:35 Saur[m]: I guess that is something we could fix and we should have a test case for
Jan 18 23:44:35 Yeah, I started looking into what happens in this situation and I also realized that even in the case where I have a local build that doesn't have write access to the global sstate cache, a local sstate cache link that points to a no longer existing global sstate cache file will prevent the local link from being replaced by an actual file if the sstate is needed. This is because the creating code only checks for its existence, not whether it actually has any contents.
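[A minimal stand-alone sketch of the failure mode described above, assuming the post-3.4.1 behaviour of unconditionally touching the sstate file; the cache directories and the tarball name are made up for illustration and the code is not taken from sstate.bbclass.]

```python
import tempfile
from pathlib import Path

# Reproduce the dangling-symlink scenario described above in a throwaway directory.
with tempfile.TemporaryDirectory() as tmp:
    global_cache = Path(tmp) / "global-sstate"   # stands in for the NFS-mounted cache
    local_cache = Path(tmp) / "local-sstate"
    global_cache.mkdir()
    local_cache.mkdir()

    # A valid sstate object in the global cache, mirrored locally as a symlink.
    tarball = global_cache / "sstate-example.tar.zst"   # hypothetical name
    tarball.write_bytes(b"pretend this is a valid tarball")
    link = local_cache / tarball.name
    link.symlink_to(tarball)

    # The pruning job removes the old object from the global cache;
    # the local symlink now points into the void.
    tarball.unlink()
    print(link.is_symlink(), link.exists())            # True False (dangling link)

    # Unconditionally touching the file (the "always touch, ignore errors"
    # behaviour mentioned at 23:36:56) follows the dangling link and creates
    # an empty file back in the global cache.
    link.touch()
    print(tarball.exists(), tarball.stat().st_size)    # True 0

    # An existence-only check now accepts the zero-byte "tarball"; also
    # checking the size would reject it before unpacking fails.
    print(tarball.exists() and tarball.stat().st_size > 0)   # False
```

[The last check illustrates the point made at 23:44:35: testing only for existence accepts the empty file, so some content or size check would be needed before trusting the cached object.]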
Jan 18 23:46:12 Btw, is there any way to tell bitbake to not create symbolic links from the local to the global sstate cache, but rather copy the files instead?
Jan 18 23:47:33 It seems it would be more efficient (time-wise) to copy the files once rather than having them copied over and over again over NFS.
Jan 18 23:50:01 Saur[m]: at least on the autobuilder it isn't more efficient, and no, I don't think we want to add yet more codepaths to complicate things
Jan 18 23:50:57 :)
**** ENDING LOGGING AT Wed Jan 19 02:59:57 2022