**** BEGIN LOGGING AT Mon May 10 02:59:56 2021
May 10 08:02:17 kanavin: smp issue: https://autobuilder.yoctoproject.org/typhoon/#/builders/42/builds/3408/steps/13/logs/stdio - not sure if this is an improvement or not :/
May 10 08:02:43 https://autobuilder.yoctoproject.org/typhoon/#/builders/87/builds/2120/steps/14/logs/stdio
May 10 08:11:32 hmm, qemuppc serial still causing irq tracebacks :(
May 10 08:25:01 good morning
May 10 08:41:17 RP: yeah, I then remembered libepoxy was not in my pile :)
May 10 08:51:18 I had a recipe with do_compile() { make FOO=1 } and do_compile_machine2() { make FOO=2 }, and they worked "standalone", but when building both machines at the same time they could get a prebuilt package from the sstate cache of the other variant. Is that expected?
May 10 08:51:32 on Dunfell
May 10 09:21:15 ernstp: did the two commands give identical output now or in the past?
May 10 09:21:47 RP: nope
May 10 09:23:34 the recipes are clearly parsed correctly, I can check with bitbake -e etc. So I'm guessing it's a caching issue. I have to "force" them with PACKAGE_ARCH = "${MACHINE_ARCH}"
May 10 09:27:16 is it just picking the RPM from tmp/deploy/rpm/cortexa7t2hf_neon_mx7d/ even though the hash is different?
May 10 09:27:23 ernstp: oh, you would have to mark them as machine specific. How are you building both machines at the same time? multiconfig?
May 10 09:27:50 RP: no, just two bitbakes in a row
May 10 09:28:46 ernstp: I'd expect it to have re-run the task, but would your makefile correctly clean up the state from the other?
May 10 09:29:05 ernstp: marking as machine specific is the right thing to do here
May 10 09:31:20 RP: I was surprised by this anyway
May 10 09:41:41 ernstp: Whilst it is "obvious" when you look at those two compile commands, it is hard for bitbake to know that it would need to do something special there.
May 10 09:42:12 RP: I thought it would use the taskhash thingy
May 10 09:42:37 because the recipe parses differently
May 10 09:42:52 ernstp: A build of machine=a knows nothing about machine=b :/
May 10 09:43:07 ernstp: Also, tasks are meant to be idempotent; when you run them, they should rerun cleanly without impact from previous runs
May 10 09:43:16 The system is designed with that assumption
May 10 09:43:20 but everything in sstate-cache is indexed by hash, right?
May 10 09:43:31 ernstp: it is, yes
May 10 09:44:36 ernstp: I'm just guessing, but I suspect that the first build with FOO=1 writes out things in do_compile. You then change machine and FOO=2 writes out more things but doesn't clean up what FOO=1 did. The installed result "leaks" data from FOO=1 into the second build
May 10 09:45:05 ernstp: sstate itself is very unlikely to have done this; corruption in the tasks is usually how something like this would happen
May 10 09:45:20 RP: I'm pretty sure that's not the issue here... but let me double check
May 10 09:47:50 RP: can it pick a package from tmp/deploy/rpm/cortexa7t2hf_neon/ without checking the hash?
May 10 09:48:14 ernstp: no
May 10 09:48:21 ok
May 10 09:50:10 ernstp: you should be able to test by doing MACHINE=A bitbake X -c install; MACHINE=B bitbake X -c install
May 10 09:50:35 running install will bypass sstate entirely and force the tasks to run, as install isn't an sstate task
May 10 10:03:12 RP: none of the meta-arm branches need meta-kernel now
May 10 10:03:38 rburton: ah, cool. I think I have a pending patch I need to restart the controller and get in for that?
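A minimal sketch of the machine-specific pattern discussed above, assuming a makefile that honours a FOO variable; "machine2" and FOO come from the discussion, everything else is illustrative:

    # Mark the recipe machine-specific so each MACHINE gets its own
    # package arch and its own sstate/deploy artefacts.
    PACKAGE_ARCH = "${MACHINE_ARCH}"

    do_compile() {
        oe_runmake FOO=1
    }

    # Dunfell-era "_" override syntax: used when MACHINE = "machine2".
    # With the recipe marked machine-specific, the differing task
    # signatures keep the two variants from sharing sstate objects.
    do_compile_machine2() {
        oe_runmake FOO=2
    }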
May 10 10:04:03 yeah i did fiddle the AB somewhere to stop it fetching, so maybe you're missing an update
May 10 10:04:51 rburton: yes, we've not updated it :/
May 10 10:37:58 RP: so we run do_compile { machine specific }, but do_configure is _not_ machine specific, so it uses the sstate cache and hence clean is never run
May 10 10:39:08 RP: so yeah, it comes down to a buggy makefile I guess
May 10 10:41:01 ernstp: it doesn't use the sstate cache as such. It just doesn't run clean and the two builds corrupt sstate
May 10 10:41:24 ernstp: we do have logic at the configure level to try and clean up old builds. Not at compile though
May 10 10:41:30 and the configure stuff is fragile
May 10 10:52:29 are prefuncs not run when doing a bitbake -C task recipe?
May 10 10:52:58 qschulz: should be
May 10 10:53:02 doing a bitbake -C fetch recipe and it seems clean_recipe_sysroot is not run, since clean_recipe_sysroot[cleandirs] += "${RECIPE_SYSROOT} ${RECIPE_SYSROOT_NATIVE}" is not executed?
May 10 10:53:07 (reminded: using thud)
May 10 10:53:11 reminder*
May 10 10:55:19 RP: found the issue
May 10 10:55:22 you won't like it :p
May 10 10:55:42 MULTILIB \o/ \o/ \o/
May 10 10:55:44 * RP ponders heading afk
May 10 10:56:24 there's a missing ${WORKDIR}/${ML_PREFIX}-recipe-sysroot in the cleandirs
May 10 10:56:35 qschulz: we had issues with that code and dropped it, it isn't in master
May 10 10:57:07 The struggle of running on EOL branches :/
May 10 10:57:20 qschulz: there are a ton of other subtle race issues around that
May 10 10:57:25 RP: thanks, I guess nothing to fix on master wrt that then :)
May 10 10:57:47 qschulz: correct, although there remain issues in there with cleaning up fetches :/
May 10 10:58:27 RP: I think I remember the discussion, wasn't that the starting point of a discussion around making a ${WORKDIR}/local-files or something?
May 10 10:59:09 so that anything in SRC_URI that isn't being put in a subdir makes it into that ${WORKDIR}/local-files so we can remove it and have a clean slate?
May 10 10:59:46 qschulz: correct
May 10 11:00:13 * RP could never get it to work right
May 10 11:05:11 RP: here it looks like it "uses" configure from sstate, hence not running clean https://paste.ubuntu.com/p/vCNz8Z4nyn/
May 10 11:05:36 RP: may be a stupid suggestion but.... what about having ${WORKDIR}/src for any file in SRC_URI, and anything that needs to be unpacked/git cloned, whatever, is actually put in a subdir of ${WORKDIR}/src?
May 10 11:06:00 such that ${S} would be ${WORKDIR}/src/${PN}-${PV} by default?
May 10 11:06:02 RP: anyway, I'm going to add do_configure { make clean FOO=1 } also, should help bb detect the new hash
May 10 11:07:03 so that the path to local unpacked files is still the parent of the path to ${S}
May 10 11:19:43 ernstp: that is the "configure cleanup" code I mentioned. It is saying that the configure task checksum hasn't changed so it didn't run that
May 10 11:19:52 ernstp: configure tasks are not stored in sstate
May 10 11:20:20 ernstp: this is what I meant about it not "seeing" compile task changes
May 10 11:21:14 qschulz: it would mean longer paths and people already complain about the path depth. It also means changing most settings of S in recipes
May 10 11:21:27 qschulz: I think my last proposal was to do this and add a symlink, now I think about it
May 10 11:23:08 A symlink?
May 10 11:23:14 from where to where?
May 10 11:23:22 RP: right, not stored, but not run. got it
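Continuing the sketch, ernstp's follow-up idea above (a per-machine do_configure running make clean, so the configure checksum also differs between the machines and stale objects are removed) might look like this, assuming the project's makefile has a clean target; FOO and machine2 are again taken from the discussion:

    do_configure() {
        oe_runmake clean FOO=1
    }

    # FOO appears in the function body, so the do_configure signature
    # changes per machine and the clean is re-run when switching MACHINE.
    do_configure_machine2() {
        oe_runmake clean FOO=2
    }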
May 10 11:24:54 mmm, might check in the OE-Core ML archives I guess
May 10 11:26:03 Somewhat related question... how does EXTRA_OEMAKE get into my taskhash, if my do_configure is just "oe_runmake clean"...
May 10 11:28:10 qschulz: ${S} to the src/
May 10 11:28:54 ernstp: how is oe_runmake defined?
May 10 11:28:55 ernstp: https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/base.bbclass#n65 and https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/base.bbclass#n61
May 10 11:34:25 yeah, but the yocto task "do_configure" only looks like "oe_runmake clean", it's not expanded
May 10 11:45:37 it seems to pick up the dependency anyway! I guess it recursively expands the function somewhere anyway
May 10 12:02:25 ernstp: the variables are recursed, yes
May 10 12:05:35 RP: (though this is not a variable, it's a bb function)
May 10 12:06:40 ernstp: right, we parse shell and python functions too
May 10 12:27:22 thanks for the support
May 10 15:03:27 12 1/2 hours.. not bad
May 10 15:03:40 oops
May 10 16:32:55 RP: 👍 for the mail
May 10 16:34:41 qschulz: hopefully gives people something to point at about the kinds of problems we're facing :)
May 10 16:36:47 RP: copy-pasting this in my company's main channel ;) I think our struggle is that by the time we have upgraded to a newer release of Yocto, it's almost already EOL. Which means the bugs to be fixed are unlikely to still be in master, or easily reproducible without fully upgrading again (maybe I'm just finding excuses)
May 10 16:38:14 RP: sadly, that is also an issue that the Linux kernel is seeing too (in some other form I guess). There was a discussion on the LKML a few weeks ago involving gkh and some other long-time contributors
May 10 16:40:39 RP: to be explicit, I don't blame Yocto for moving too fast, I blame ourselves for moving too slowly, which is bad for us and does not allow us to really contribute back
May 10 16:41:54 qschulz: if you can get to dunfell, things should improve a bit
May 10 16:42:12 This is partly why we're trying to get the LTS working
May 10 16:43:01 RP: by the time we move to dunfell, the next LTS will be out and we'll again be lagging behind, since Dunfell is, as of today, if I understood correctly, not really planned to be supported past a few months after the 3.5 (?) LTS release
May 10 16:43:13 RP: but yes, the LTS IMO is a very good move
May 10 16:43:34 qschulz: it is still possible we could extend dunfell, we'll see
May 10 16:44:45 RP: I know, but I wouldn't want to advocate the move to Dunfell for my company, since last time it took ages to migrate from krogoth to thud, knowing there's a possibility it will be maintained for only a few more months
May 10 16:45:15 It'd be great for the project to get funding/support for LTS for longer than 2 years. Fingers crossed!
May 10 16:48:01 I also realize how hypocritical it is to say that and not participate much; here's to the hope of improving that in the mid/long term
May 10 16:50:15 qschulz: a couple of new gold members would go a long way to sorting out LTS for another two years
May 10 16:59:28 I still haven't figured out how to even mention the idea to the higher-ups that my company should join YP
May 10 17:00:17 It's also hard because the portion of the company that uses Yocto is small, so the pricing based on company size is actually a deterrent
May 10 17:07:20 khem: you pointed me to this example of extending the yocto linux metadata a few days back: https://github.com/akuster/meta-odroid/tree/master/recipes-kernel/linux/linux-yocto-dev/odroid-kmeta
May 10 17:07:50 question: is the "odroid-kmeta" subdirection required? if so, is it required to be named specially?
May 10 17:08:00 subdirectory
May 10 17:11:34 i.e., does it have to be named <arch>-kmeta, where <arch> is the cpu architecture?
May 10 17:16:43 yates. the name doesn't matter, it is the 'type=kmeta' that you see in that SRC_URI that triggers it as a kernel configuration repo.
May 10 17:39:45 hello! how do I export a variable from one task, i.e. from do_configure, to do_install? I've tried using 'export MYVAR="myvalue"' but for some reason this does not work.
May 10 17:46:52 yates: it's just a name, make sure to use it in SRC_URI correctly, e.g. see SRC_URI_append = " file://odroid-kmeta;type=kmeta;name=odroid-kmeta;destsuffix=odroid-kmeta"
May 10 17:47:13 you can then name it anything as long as you inform the tooling about the name consistently
May 10 17:53:03 aleblanc: what is your use case for exporting a variable from one task to another?
May 10 17:55:40 qschulz, I parse the fetched code, extract some information and then pass it to the install step to determine the correct path. The information changes with each version of the code (fetched using git) and cannot be determined in advance.
May 10 18:00:45 qschulz, I could re-run the code to extract the information in do_install, but I'm looking for a more elegant way to achieve that...
May 10 18:05:29 aleblanc: why don't you make your actual SW **configure** (in the sources) what's needed for the compile AND install targets
May 10 18:05:33 ?
May 10 18:06:26 Me being naive, I think this is a more proper and simple approach to handling this information instead of parsing the code from Yocto
May 10 18:07:03 qschulz, the SW is a complex legacy java app that I can't really modify... I have to work around it.
May 10 18:11:52 * qschulz shrugs
May 10 18:12:27 the easy way IMO is to write the information parsed during the configure task into a file in ${B} and read that file from the install task
May 10 18:14:38 qschulz, maybe
May 10 18:14:41 tasks each have their own scope that merges with/supersedes the recipe global scope. You therefore cannot pass a variable from one task to another. You cannot modify the recipe global scope inside tasks or functions (only the python anonymous function is allowed to, and it runs at parsing time, well before do_fetch is even run)
May 10 18:16:43 qschulz, ah! ok, that's what I was thinking, there must be some sort of scope.... is setVar/getVar affected by that as well?
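A small sketch of qschulz's suggestion above: instead of trying to pass a variable between tasks, write whatever do_configure extracts from the sources into a file under ${B} and read it back in do_install. The file name, the recorded value and the install destination are all placeholders:

    do_configure_append() {
        # Placeholder for the real extraction step: record the value
        # parsed from the sources so later tasks can read it back.
        echo "some/derived/path" > ${B}/parsed-install-path
    }

    do_install() {
        # Read the value back at do_install run time (shell variable,
        # not a BitBake variable).
        subdir="$(cat ${B}/parsed-install-path)"
        install -d ${D}${datadir}/$subdir
        # ... install the application files into ${D}${datadir}/$subdir ...
    }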
May 10 18:20:46 aleblanc: yes
May 10 18:21:34 the only function that can use setVar/getVar and have the correct behavior is the python anonymous function
May 10 18:22:25 that might also apply to ${@some_python_func_or_instruction} but I'm not too sure to be honest
May 10 18:26:28 qschulz, hum ok, thanks for the info
May 10 18:30:27 I see both = and += used for DEPENDS. Is there a rule of thumb on when to use which?
May 10 18:30:49 gpanders: use +=
May 10 18:30:57 it's much safer
May 10 18:31:34 thanks, that is what I've been using as I figured it is probably safer. A quick grep through a bunch of my layers though showed twice as many usages of = than +=, so I was wondering if there was something I was missing
May 10 18:32:04 (by "my layers" I mean layers used in my project, not layers I wrote myself)
May 10 18:32:05 one simple example is when you have a recipe that inherits a class which sets DEPENDS to some value, and right after the inherit someclass in your recipe you do a DEPENDS =, you completely override what was set in someclass
May 10 18:32:42 gpanders: I think the convention now (at least in OE-Core, Yocto Project layers??) is to use DEPENDS_prepend/DEPENDS_append in classes
May 10 18:32:59 so that it's safe for recipes to use = or +=
May 10 18:33:16 e.g. https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/cmake.bbclass#n4
May 10 18:33:32 https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/autotools.bbclass#n22
May 10 18:33:34 etc...
May 10 18:36:02 considering that usually third-party layers aren't exactly following best practices (which are sometimes implicit), I'd personally go for DEPENDS += :)
May 10 18:55:31 thanks qschulz, I'll do that
May 10 19:46:31 zeddii/khem: thanks for the hints. very useful.
May 10 20:34:54 if you look in the linux kernel srctree under the "arch" subdirectory you see names like "alpha", "m68k", "riscv", etc. is this what the KMACHINE variable is for, to map a machine name to the kernel architecture name for the machine?
May 10 20:37:07 is this where yocto finds a defconfig, e.g., linux/arch/riscv/configs/defconfig
May 10 20:37:27 ERROR: linux-yocto-5.8.13+gitAUTOINC+b976de4f41_3c5d210805-r0 do_kernel_metadata: Could not locate BSP definition for qemucsky/standard and no defconfig was provided
May 10 20:39:08 nope. they have nothing to do with the source tree. They just allow us to map the OE variable "MACHINE" to something else in the kernel metadata.
May 10 20:39:25 the default is that KMACHINE == MACHINE
May 10 20:40:19 zeddii: so where is the "defconfig" from the error above to be found?
May 10 20:40:34 there isn't always one.
May 10 20:40:39 depends on your kernel source tree.
May 10 20:41:14 if it is organized like the major arches, it will be found at: arch/${ARCH}/configs/${KBUILD_DEFCONFIG}
May 10 20:41:38 but that's only if you set KBUILD_DEFCONFIG
May 10 20:42:01 if you haven't set it, then you should just have 'defconfig' in your SRC_URI provided with file:// ... as you would anything else.
May 10 20:42:49 zeddii: i am setting up for a rather new architecture: csky. there is no kernel metadata for this architecture yet
May 10 20:43:06 at least not that i've seen. i'm targeting 5.8.13
May 10 20:43:36 so i'm trying to "graft" some in via my own custom meta-csky layer
May 10 20:44:20 but i've noticed that there is a linux/arch/csky/configs/defconfig in the kernel srctree
May 10 20:44:38 do you need to provide the defconfig in the SRC_URI, or would you need a full metadata description that describes the board (and provide it like the type=kmeta example)?
May 10 20:44:51 to even have it go look there, you need to set the variable I mentioned
May 10 20:45:08 KBUILD_DEFCONFIG
May 10 20:45:28 it's not going to start searching around in the kernel source tree without being explicitly told to do so via that variable.
May 10 20:46:37 should set KBUILD_DEFCONFIG in the recipes-kernel/linux/.bbappend file?
May 10 20:46:47 should i set...
May 10 20:47:09 yup. if that's what you want to use.
May 10 20:47:30 wait..
May 10 20:48:02 if i set that variable, does that mean yocto stops using the yocto kernel metadata?
May 10 20:48:30 it means it uses that for the base configuration. you can still provide .cfg and .scc files to be applied on top of it.
May 10 20:48:59 ahh, ok
May 10 20:50:29 where in the builddir tmp folder is the actual kernel source selected for the "yocto linux" kernel placed?
May 10 20:50:36 i have looked and can't find it
May 10 20:51:07 you see, i'm afraid the version of 5.8.13 that was pulled in did NOT have the csky architecture
May 10 20:52:37 here are all the git downloads i see: http://paste.ubuntu.com/p/9sqHSxj7pP/
May 10 20:53:57 the csky arch was added fairly recently, i believe, like late last year
May 10 20:54:00 that depends on your kernel recipe. whatever you've pointed it at for the repo and SRCREV
May 10 20:55:00 i don't have any SRCREV..
May 10 20:55:40 here is my kernel recipe .bbappend currently: http://paste.ubuntu.com/p/fmqgXvFTXk/
May 10 20:57:01 what is that bbappending? linux-yocto-dev? That's a bleeding-edge kernel with AUTOREV
May 10 20:57:01 i guess i thought that would be picked up by the oe-core
May 10 20:57:12 yes
May 10 20:57:24 so LINUX_VERSION is ignored?
May 10 20:57:50 well, no. linux-yocto_5.8.bbappend
May 10 20:58:26 LINUX_VERSION is used for the PACKAGE version, but the SRCREV still defines what is built.
May 10 21:02:05 then my .bbappend would be picking up the SRCREV in meta/recipes-kernel/linux/linux-yocto_5.8.bb, right?
May 10 21:03:06 yates: Probably related - https://lwn.net/Articles/845206/
May 10 21:03:53 This is why you generally do not want to be careful when using the kernel version macros.
May 10 21:04:01 s/do not/do/
May 10 21:04:09 yates. yes, that's where it would come from, and using whatever KBRANCH is set to.
May 10 21:06:03 ok
May 11 00:04:41 we keep the linux-libc-headers package on our latest reference version of the kernel, e.g. 5.10, is there a scenario where these won't work with an older kernel? even if they're only used to build the cross-compiler
May 11 00:13:24 is it possible to use something like ${ARCH} in a variable assignment like RDEPENDS_${PN}? e.g. PREFERRED_VERSION_cargo-bin-cross-${ARCH} = ... (using meta-rust-bin)
May 11 00:14:56 it doesn't seem to expand as i'd expect in the `bitbake -e` output and just changing it from one version to another doesn't seem to cause any tasks to rebuild
May 11 00:23:49 and ... i had the wrong variable :p TARGET_ARCH not ARCH :) d'oh
May 11 02:39:33 alejandrohs: there are some scenarios, but not really anything viable when talking about the toolchain and libc.
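Pulling together zeddii's answers to yates above, a hypothetical linux-yocto_5.8.bbappend for a machine whose in-tree defconfig should be used as the base configuration might look roughly like this; "qemucsky" comes from the error message in the discussion, the extra-options.cfg fragment is purely illustrative, and the SRCREV/KBRANCH still come from the base linux-yocto_5.8.bb recipe as noted above:

    # Hypothetical linux-yocto_5.8.bbappend fragment
    FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

    COMPATIBLE_MACHINE_qemucsky = "qemucsky"

    # Use the in-tree arch/${ARCH}/configs/defconfig as the base configuration
    KBUILD_DEFCONFIG_qemucsky = "defconfig"

    # Optionally layer extra options on top with a configuration fragment
    SRC_URI_append_qemucsky = " file://extra-options.cfg"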
May 11 02:59:10 alejandrohs there have been issues in the past where the glibc or compiler didn't understand some of the most recent kernel header changes..
May 11 02:59:16 it happened a lot more in the past though than recently..
May 11 02:59:28 It's more likely that someone will define a minimal kernel version (matching the headers) and that would restrict which kernels the glibc will run against
May 11 02:59:39 (other than the minimum kernel version issues, it's been many years since I've seen it not work)
**** ENDING LOGGING AT Tue May 11 02:59:57 2021