**** BEGIN LOGGING AT Fri Jun 12 02:59:58 2020
Jun 12 03:47:20 Where are the different Yocto variables like ${TOPDIR} defined?
Jun 12 03:55:35 I am looking for other known variables that I can use in my Yocto build, e.g. THISDIR.
Jun 12 03:58:19 oe-core/meta/conf/bitbake.conf defines most key ones, but there's no definitive list; any recipe can use any variable it wants.
Jun 12 06:39:44 What are the different functions available in Yocto, like do_compile_append?
Jun 12 06:39:49 I am looking for the different functions available in Yocto, like do_compile_append().
Jun 12 06:58:32 SJH22: those 'functions' are called tasks, see here https://www.openembedded.org/wiki/List_of_Executable_tasks
Jun 12 07:00:33 https://www.yoctoproject.org/docs/current/dev-manual/dev-manual.html#extendpoky
Jun 12 07:38:02 quit
Jun 12 08:12:25 where can I find the generated Kconfig file to see what is built into my kernel?
Jun 12 08:45:16 guest37: bitbake -e virtual/kernel | grep ^B=
Jun 12 08:47:26 mckoan: thanks! where can I find a list of bitbake commands like this that might help me out?
Jun 12 08:49:23 <[Sno]> RP: I have the perl install/clean loop running for 2 days meanwhile - different architectures, EWONTFAIL :(
Jun 12 08:49:51 guest37: https://wiki.koansoftware.com/index.php/Bitbake_options
Jun 12 08:52:13 mckoan: thanks bro!
Jun 12 10:33:30 [Sno]: typical :(. I wish we could understand what is needed to reproduce it :/
Jun 12 10:33:59 [Sno]: I appreciate the effort of trying. Together, all these intermittent failures are a real pain :(
Jun 12 10:36:10 <[Sno]> RP: let's keep watching it - maybe we'll learn what external circumstances trigger it
Jun 12 10:36:47 <[Sno]> e.g. all my build systems use ext4 on attached mass storage (HDD, SSD) - no network filesystem, ccache or distcc
Jun 12 10:36:48 [Sno]: right, that's all we can do really
Jun 12 10:37:03 [Sno]: it's probably system-load dependent
Jun 12 10:39:27 <[Sno]> Hmm, maybe - I use one build job per host and ncpus*2 make jobs and ncpus*2 bitbake tasks
Jun 12 11:57:04 RP: Is the test_yocto_source_mirror failure blocking anything?
Jun 12 12:01:26 paulbarker: it's causing all autobuilder tests to fail atm
Jun 12 12:01:53 not "blocking" but serious, as we can't get a green build
Jun 12 12:02:11 has anyone managed to build gccgo?
Jun 12 12:02:26 RP: ok, let me see what I can do this afternoon. Got an idea where the issue lies
Jun 12 12:02:33 I just get | ../../../../../../../../work-shared/gcc-10.1.0-r0/gcc-10.1.0/libgcc/../gcc/tsystem.h:87:10: fatal error: stdio.h: No such file or directory
Jun 12 12:03:34 paulbarker: thanks. If you don't have time, at least share the hints as I'll probably have to look into it
Jun 12 12:04:23 RP: Will do
Jun 12 12:28:41 Hello, I am trying to use the http://downloads.yoctoproject.org/releases/yocto/yocto-1.8/machines/qemu/qemux86/core-image-sato-sdk-qemux86.ext4 image in a QEMU VM. This image has gcc, make, etc. but not cmake. Also, dnf doesn't work because there are no repositories. I tried searching for repositories, but didn't find any. How do I
Jun 12 12:28:42 install cmake on Yocto?
Jun 12 12:30:25 Other methods suggest building the image on my own and adding a layer that contains cmake, as far as I understand. But I'd like to avoid that. Is there any other way which doesn't require me to compile the image myself?
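[Editor's note: for the task and variable questions earlier in the log, a minimal sketch of a recipe fragment. It is illustrative only; the task name do_report_vars is hypothetical, while TOPDIR and WORKDIR are standard variables set in oe-core/meta/conf/bitbake.conf.]

    # Shell tasks such as do_compile can be extended with an _append override:
    do_compile_append() {
        bbnote "extra commands run after the original do_compile body"
    }

    # Tasks can also be written in Python; 'd' is the BitBake datastore:
    python do_report_vars() {
        bb.plain("TOPDIR = %s" % d.getVar("TOPDIR"))
        bb.plain("WORKDIR = %s" % d.getVar("WORKDIR"))
    }
    addtask report_vars after do_compile before do_build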
Jun 12 12:38:34 flampy: https://stackoverflow.com/questions/41964891/yocto-sdk-with-cmake-toolchain-file
Jun 12 12:42:18 or if you need cmake in the target image, add IMAGE_INSTALL_append = " cmake" in local.conf
Jun 12 12:42:54 mckoan: doesn't the method suggested in that stackoverflow answer also require me to build the image from source, unlike downloading the one from the website?
Jun 12 12:44:25 flampy: yes, sorry, I didn't notice that you were using a prebuilt image
Jun 12 12:46:52 flampy: so you have to rebuild the image
Jun 12 12:50:28 flampy: What is your use case for that image? It's an old release which is no longer supported, and in general the Yocto Project is about building images yourself from source
Jun 12 12:53:07 paulbarker: I am using it as a rootfs for QEMU, which I am using as a kernel development environment
Jun 12 12:53:16 similar to this: https://linux-kernel-labs.github.io/refs/heads/master/info/vm.html?highlight=sato
Jun 12 12:56:00 flampy: I think those instructions are pointing you in the wrong direction. There are other, much better options for a VM image in which to do kernel development
Jun 12 12:56:29 flampy: Choose your favourite Linux distro and run that in a VM instead
Jun 12 13:06:14 paulbarker: I see, hmm. For more context, I am using Yocto to act as the host PC for a DSP emulator.
Jun 12 13:06:22 architecture diagram: https://www.alsa-project.org/main/images/4/4e/Heterogeneous-vm.png
Jun 12 13:06:38 wiki: https://www.alsa-project.org/wiki/Firmware section "Using the Qemu DSP emulator"
Jun 12 13:07:42 so maybe there is some relevance in using Yocto, since I read on the website that it is used in embedded systems, etc. I am a student trying to contribute to this firmware and getting acquainted with the setup.
Jun 12 13:56:34 RP: I found the issue, may need a bit of a sanity check on how to resolve it
Jun 12 13:57:55 git-submodule-test has bitbake as a submodule twice with different commits. The fetcher runs for each instance. On the first instance it's downloaded correctly
Jun 12 13:58:26 On the second instance ud.clonedir already exists, so try_premirror returns False and premirrors aren't even tried
Jun 12 13:58:32 https://git.openembedded.org/bitbake/tree/lib/bb/fetch2/git.py#n321
Jun 12 13:59:30 As premirrors are skipped, it goes straight to checking upstream and fails out, as it's an untrusted URL in this test
Jun 12 13:59:31 https://git.openembedded.org/bitbake/tree/lib/bb/fetch2/__init__.py#n1703
Jun 12 14:00:12 There are definitely repositories out there that list the same URL with different commits for different checkout locations. So the test sounds valid, but the behaviour doesn't.
Jun 12 14:00:25 There are two ways to fix it
Jun 12 14:01:15 What I THOUGHT was happening (at least in the past) is that it would check the local downloads, see whether the commit was present, and then proceed through the premirrors, upstream, mirrors. I thought that had been working in the past.
Jun 12 14:01:17 1) Fix it in the gitsm fetcher: in needs_update() we can check if the desired commit is present instead of just checking the bitbake.srcrev config option
Jun 12 14:01:51 The gitsm fetcher should be calling the git fetcher (it was in the past) to do that validation. Perhaps that got dropped, or there is another corner case that was missed
Jun 12 14:01:54 fray: It won't look at the local downloads unless the download method actually gets called. Trying premirrors is the first opportunity for that
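[Editor's note: a simplified paraphrase of the Git fetcher logic under discussion (see the git.py#n321 link above); this is not the exact upstream source, just a sketch of the behaviour the log describes.]

    # bb/fetch2/git.py, simplified: try_premirror() gates whether premirrors
    # are attempted at all. An existing clone directory short-circuits to
    # False, so a second fetch of the same repository at a different commit
    # never consults the premirrors.
    def try_premirror(self, ud, d):
        if os.path.exists(ud.clonedir):
            return False
        return True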
Jun 12 14:02:33 Ya, the download method itself should be called for each of the submodules, even if it's a directory that already exists.. (we need to know if it's been processed or not)
Jun 12 14:02:38 Git.download() isn't being called at all for the second instance, as premirrors are skipped
Jun 12 14:03:16 The other possible fix... (2) fix Git.try_premirrors() so that it doesn't return False if the clonedir already exists
Jun 12 14:03:33 gitsm itself shouldn't have any knowledge of any premirrors, mirrors, etc..
Jun 12 14:03:34 I'm not sure what other impacts changing Git.try_premirrors() behaviour will have though
Jun 12 14:03:44 it was intended to -always- call git itself, which has that knowledge
Jun 12 14:04:09 fray: That's correct
Jun 12 14:04:38     try:
Jun 12 14:04:38         # Check for the nugget dropped by the download operation
Jun 12 14:04:38         known_srcrevs = runfetchcmd("%s config --get-all bitbake.srcrev" % \
Jun 12 14:04:38             (ud.basecmd), d, workdir=ud.clonedir)
Jun 12 14:04:38         if ud.revisions[ud.names[0]] not in known_srcrevs.split():
Jun 12 14:04:39             return True
Jun 12 14:04:44 that's what need_update should be doing here..
Jun 12 14:04:56 and why git.need_update is called just prior
Jun 12 14:05:20 fray: That's solution (1) of the 2 I said above
Jun 12 14:05:52 Ok, then I don't understand what isn't right then..
Jun 12 14:05:54 Are you saying Git.try_premirrors() is behaving correctly and we should fix this in Gitsm.need_update()?
Jun 12 14:06:15 No, I'm saying that gitsm looks right to me. So any bugs would be in the git fetcher.
Jun 12 14:06:28 Ok
Jun 12 14:07:04 Do we have any tests for the situation where you have two recipes both pointing to the same git repository but different commit ids (the same thing as what gitsm is doing, but at a recipe level instead of a submodule level)?
Jun 12 14:07:19 fray: We do now and it's failing
Jun 12 14:07:28 Oh no
Jun 12 14:07:29 Sorry
Jun 12 14:07:45 At a recipe level I'm not sure there's an individual test, but it shouldn't fail
Jun 12 14:07:55 The issue here is caused by Gitsm for sure
Jun 12 14:08:11 I -think- I've seen issues in the past with two individual recipes pointing to the same upstream SRC_URI but different SRCREVs that have had problems, but I'd never been able to fully track it down
Jun 12 14:08:14 Maybe I just need to explain it better
Jun 12 14:08:58 So what the system (gitsm) should be doing at a high level is simply iterating (recursively) over each submodule it finds. Each iteration is a 'new' fetch (a recursive call back to gitsm, which in turn calls the git fetcher functions)
Jun 12 14:09:21 So theoretically it should work in exactly the same way as if someone had specified individual SRC_URIs and SRCREVs instead of using gitsm.
Jun 12 14:09:41 fray: I'll walk through what's happening
Jun 12 14:10:34 For our second instance of the same source repo, Gitsm.needs_update() looks for the bitbake.srcrev 'nugget' set by the download operation. That contains the srcrev used by the first instance but does not yet contain the srcrev we want, so it returns True (it needs updating)
Jun 12 14:11:13 The fetcher then calls the try_premirrors() function on that fetch instance. As Gitsm doesn't override that, it falls through to Git.try_premirrors(), which returns False as the clonedir is already there
Jun 12 14:11:31 So the fetcher doesn't try any premirrors. The download method doesn't actually get called at all
Jun 12 14:11:43 ok.. so git.try_premirror is only looking for the clonedir and not the contents?
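[Editor's note: a hedged sketch of what checking the contents, rather than just the clonedir, could look like. The helper name commit_present is hypothetical and assumes the bb.fetch2 module context; runfetchcmd and ud.basecmd are the real fetcher utilities already quoted above, and 'git cat-file -e <rev>^{commit}' exits non-zero when the object is missing.]

    # Hypothetical helper: does the local bare clone already contain this commit?
    def commit_present(ud, d, rev):
        try:
            runfetchcmd("%s cat-file -e %s^{commit}" % (ud.basecmd, rev),
                        d, workdir=ud.clonedir)
            return True
        except bb.fetch2.FetchError:
            return False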
Jun 12 14:11:46 It then proceeds to trying upstream but bails out, as upstream is not a trusted URL in this test
Jun 12 14:11:52 fray: Correct
Jun 12 14:12:18 gotcha.. ya, I'd say the bug is git.try_premirror. It should be verifying that what is present has the commit
Jun 12 14:12:53 fray: If it did that, it would see the commit is there and return False. But Gitsm wouldn't see it, as bitbake.srcrev hasn't been set with that commit yet
Jun 12 14:13:59 I think Gitsm.needs_update() should be checking if the actual commit is there, not looking for some proxy bitbake.srcrev value
Jun 12 14:14:28 And maybe Git.try_premirrors() also needs modifying to return True if the clonedir is present but lacks the correct commit
Jun 12 14:14:54 Ya, I get what you are saying, but I'm still not sure I understand..
Jun 12 14:15:28 My understanding of try_premirrors is that it has two states.. either it downloaded something (or it's present) or it didn't.. true/false..
Jun 12 14:15:38 in the first case, the thing is present, and should just 'work' then
Jun 12 14:16:07 as for the 'nugget' that was added, I need to look at the git logs.. I know this was added specifically to avoid a pathological case
Jun 12 14:16:56 fray: Ok
Jun 12 14:17:26 See (poky) 2030e815bb1ba932c58b0c9318f97602fdd4edfb or bitbake 30fe86d22c239afa75168cc5eb262b880886ef8a
Jun 12 14:18:06 the problem we were having is that we can't actually look into the download contents in need_update, because things can run in parallel and we were having race issues.. locking was inconsistent.
Jun 12 14:18:11 paulbarker: I think that try_premirror in the git fetcher needs to call clonedir_need_update in that exists() call
Jun 12 14:18:17 fray: That makes perfect sense now
Jun 12 14:18:25 paulbarker: I'd try that and see if that breaks the tests
Jun 12 14:18:36 So instead, what we should be doing is caching everything we've iterated, so that we know if we have to have the git fetcher 'try'
Jun 12 14:20:04 I think the git fetcher behaviour with premirrors is a little deliberate though: if we had a clone, try and update it rather than pulling more from the premirror :/
Jun 12 14:20:29 I'm not sure it's right or can work like that though
Jun 12 14:21:19 Hmm.. ya, because updating something will fail if we can't or are not allowed to access the upstream
Jun 12 14:23:00 RP: fray: Sounds like the Gitsm fetcher needs to be able to recurse over the submodules to see if the correct commits are present locally before we try premirrors
Jun 12 14:23:16 Hmm, openssl-native seems to be broken on dunfell
Jun 12 14:24:17 paulbarker: ah, possibly
Jun 12 14:25:20 RP: Basically make Gitsm.need_update() recurse through the submodules and add the bitbake.srcrev values if the commits are present
Jun 12 14:25:48 Then return True if it's found every commit is already present
Jun 12 14:26:27 paulbarker: I've gotten a bit lost along the way but that sounds right (I think you mean False?).
Jun 12 14:26:41 RP: Yes. You're right
Jun 12 14:27:05 It's easy to get a bit lost in it. I'll try implementing that and see what happens
Jun 12 14:32:24 paulbarker: this is why I started writing fetcher test cases...
Jun 12 15:20:42 Ok, sorry, openssl-native isn't broken on the latest dunfell. OpenSSL 1.1.1g works, OpenSSL 1.1.1f is broken
Jun 12 15:48:41 JPEW: openssl is broken in buildtools-tarball, if that helps? :)
Jun 12 15:50:23 RP: Heh, not really... I was seeing `openssl verify` fail to verify certificates with OpenSSL 1.1.1f; what's wrong with the one in buildtools-tarball?
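[Editor's note: returning to the fetcher thread above, a sketch of the proposed Gitsm.need_update() recursion discussed around 14:25. This is an editor's paraphrase of the idea, not the patch that was actually submitted; _iter_submodules is a hypothetical helper yielding per-submodule fetch data.]

    # Sketch: an update is needed if the top-level repo or any submodule
    # is missing its required commit locally.
    def need_update(self, ud, d):
        if Git.need_update(self, ud, d):
            return True
        for sub_ud in self._iter_submodules(ud, d):   # hypothetical helper
            if sub_ud.method.need_update(sub_ud, d):
                return True
        return False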
Jun 12 15:52:36 JPEW: the environment isn't set to account for the relocation. sakoman has a patch coming
Jun 12 15:55:12 If anyone can spot a better way of doing rename detection in buildhistory please do send patches, I don't like mine...
Jun 12 15:57:10 Just sent the buildtools-tarball patch
Jun 12 16:05:47 RP: You could do what git does and diff the files to calculate a similarity index
Jun 12 16:06:09 If they are less than 50% different, it's the same file, just renamed
Jun 12 16:09:34 JPEW: we don't have the files, just lists
Jun 12 16:38:16 RP: Ah, OK
Jun 12 17:56:19 RP: I'm pretty sure I have a fix, going to re-run all the tests before submitting it
Jun 12 17:57:15 There's room for some refactoring & cleanup in the gitsm fetcher but I'll come back to that another day
Jun 12 18:13:46 paulbarker: sounds great, thanks!
Jun 12 18:13:56 paulbarker: getting the autobuilder happy again is a huge help
Jun 12 18:15:26 RP: bitbake-selftest is happy with the exception of a few hashserv tests failing because of an IPv6 issue; that will be unrelated to my changes. Now for oe-selftest
Jun 12 18:33:45 paulbarker: I'd imagine if bitbake-selftest is good, that will have stressed anything in that codepath. I'm happy to take it into -next for the oe-selftests
Jun 12 18:34:02 paulbarker: those will take a while :/
Jun 12 18:59:58 RP: Ok, let me write some commit messages and send the patches
Jun 12 19:16:32 RP: Patches sent. Let's see what the autobuilder thinks
Jun 12 19:25:57 paulbarker: thanks
**** ENDING LOGGING AT Sat Jun 13 02:59:57 2020
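[Editor's postscript: JPEW's similarity-index suggestion for buildhistory rename detection could not be applied directly because only file lists, not contents, are available. A hypothetical list-based fallback, illustrative only and not the buildhistory implementation:]

    # Pair removed paths with added paths by name similarity using only the
    # file lists; detect_renames and its cutoff are editor's assumptions.
    import difflib

    def detect_renames(removed, added, cutoff=0.5):
        renames = []
        remaining = list(added)
        for old in removed:
            match = difflib.get_close_matches(old, remaining, n=1, cutoff=cutoff)
            if match:
                renames.append((old, match[0]))
                remaining.remove(match[0])
        return renames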