**** BEGIN LOGGING AT Thu Apr 30 02:59:57 2020
Apr 30 07:16:37 New news from stackoverflow: Yocto SDK with cmake toolchain file
Apr 30 10:12:58 rburton, kanavin_home: Any ideas why master-next would be triggering SIGILL in various places? :/
Apr 30 10:13:14 (trying to run native binaries)
Apr 30 10:14:25 * RP can't spot the pattern
Apr 30 10:16:43 RP: no :(
Apr 30 10:19:56 last two master-next builds have issues and I don't know what happened :(
Apr 30 10:26:15 RP, https://stackoverflow.com/questions/7901867/what-causes-signal-sigill
Apr 30 10:26:20 I like the top answer
Apr 30 10:33:31 Hey. Any way to disable OSTree in Yocto?
Apr 30 10:34:05 nacknick: it's not used by default? :)
Apr 30 10:34:23 For some reason it's not, RP
Apr 30 10:36:47 nacknick: I'm pretty sure (as the maintainer) that it isn't in OE-Core or the default Yocto layers. That means it's being brought in by something you're adding/enabling on top of the base system
Apr 30 10:41:10 RP: And there is a way to disable it?
Apr 30 10:42:23 nacknick: I have no idea how you've added it. Find how it was added and don't do that? Maybe remove the layer, for example?
Apr 30 10:44:54 Another question please: I know that `INHIBIT_PACKAGE_STRIP_pn-` makes a *package* not stripped. Is there a way to make a *single* executable file inside the package not stripped, and not all its binaries?
Apr 30 10:45:16 makes a package not strip its binaries**
Apr 30 10:45:24 her
Apr 30 10:53:36 nacknick: wild guess: what about INHIBIT_PACKAGE_STRIP = "1" in the recipe?
Apr 30 10:59:36 kanavin_home: https://autobuilder.yoctoproject.org/typhoon/#/builders/79/builds/889 - can you have a look at the step2d, error: /etc/rpm/macros.perl: line 34: Macro %global is a built-in (%define)
Apr 30 11:01:04 kanavin_home: doesn't appear to break anything but is new I think
Apr 30 11:14:40 RP: I think it should be largely harmless - the test doesn't do enough to avoid rpm reading /etc/rpm/ from the host, but it shouldn't affect the outcome (which is about how rpm compares versions)
Apr 30 11:15:03 RP: the places where native rpm is used to package or create the rootfs are much more careful about that I think
Apr 30 11:16:03 kanavin_home: log files with lines starting "error" just never look very good ;-)
Apr 30 11:16:16 it also means those tests aren't deterministic :(
Apr 30 11:16:47 kanavin_home: I guess we open a bug just to track it
Apr 30 11:16:59 RP: sure
Apr 30 11:18:01 RP: it's probably a matter of checking rootfs.py to see how rpm is instructed to look in the build/.../etc/rpm that we create, and copying that bit into the test
Apr 30 11:20:45 kanavin_home: right, makes sense
Apr 30 11:21:08 kanavin_home: I'm going to struggle to get to your next patchset until I figure out this sigill issue :/
Apr 30 11:23:20 RP: yeah, unfortunate timing :(
Apr 30 11:23:31 let me know if I can do anything
Apr 30 11:24:55 kanavin_home: I just don't know how I'm going to find this, the pattern is too weird to pin down easily :(
Apr 30 11:25:06 RP: does master build cleanly?
Apr 30 11:26:55 kanavin_home: seemingly
Apr 30 11:29:02 kanavin_home: actually, no
Apr 30 11:29:37 kanavin_home: dnf returning -4 on master
Apr 30 11:35:14 RP: sigill? something native being built using -march=native on a newer machine and then the same binaries being reused on a machine without those instructions?
Apr 30 11:35:19 both master failures on centos7-ty-2
Apr 30 11:36:27 rburton: possible if something is patching that in somewhere
Apr 30 11:38:43 RP: but what is receiving sigill specifically?
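(On nacknick's stripping question above, a minimal sketch of qschulz's INHIBIT_PACKAGE_STRIP suggestion, using a hypothetical recipe name; note this disables stripping for everything the recipe packages, not just a single file:)

    # in the recipe itself (hypothetical recipe myapp_1.0.bb):
    INHIBIT_PACKAGE_STRIP = "1"

    # or, per recipe, from local.conf or another conf file:
    INHIBIT_PACKAGE_STRIP_pn-myapp = "1"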
Apr 30 11:42:09 i'd compare the cpu generation of those workers to a rebuild of the same revision on another worker that presumably works
Apr 30 11:43:08 kanavin_home: dnf, update-mime-db, xsltproc, wayland-scanner, swig
Apr 30 11:43:25 rburton: the cpu flags are quite different on the various workers
Apr 30 11:44:06 new in master?
Apr 30 11:44:23 rburton: I think this is something in recent master, yes
Apr 30 11:54:12 rburton: cpu flag differences: abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti tsc_adjust bmi1 hle avx2 smep bmi2 invpcid rtm cqm rdt_a rdseed adx intel_pt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local cpuid sdbg fma movbe
Apr 30 11:54:43 avx2 and movbe :/
Apr 30 11:58:17 can you replicate on demand? and get the error logs from dmesg?
Apr 30 11:59:06 rburton: it probably can be replicated. I do have dmesg
Apr 30 12:12:13 Hi guys. I just sent a patch for meta-dpdk and used meta-intel@yoctoproject.org as mentioned in the README
Apr 30 12:12:46 it looks like you moved to meta-intel@lists.yoctoproject.org
Apr 30 12:13:34 rburton: shrx instruction
Apr 30 12:13:58 is meta-intel@yoctoproject.org redirected? On https://lists.yoctoproject.org/g/meta-intel I don't see my patch so far.
Apr 30 12:14:43 ebail: you should use meta-intel@lists.yoctoproject.org
Apr 30 12:15:42 it looks like I finally see my message. I will add another patch to modify the README then :). Thanks RP
Apr 30 12:17:38 ebail: sounds good, thanks. I guess the redirected mail is just slow
Apr 30 12:19:14 qschulz: I just missed what you wrote. Anyway, it has the same effect
Apr 30 12:19:41 I still want to not strip a specific binary file inside the package and not all the binaries
Apr 30 12:21:46 rburton: and it's in update-mime-info itself, not any library
Apr 30 12:27:24 RP: patch for the README sent. Thanks
Apr 30 12:28:35 RP: haswell+ then
Apr 30 12:33:26 RP: i'd look at the compile log for the native recipe, see if it's doing anything weird.
Apr 30 12:33:54 alternatively, did we add any new builders? or maybe a new rolling distro has decided to pass -march=native implicitly?
Apr 30 12:34:20 I have a question about uninative: we have a kernel recipe which deploys a native tool to DEPLOY_DIR_IMAGE. Looking at uninative.bbclass in thud I see that uninative_changeinterp is applied only for native/cross/crosssdk recipes. Does anyone have cases like this where uninative_changeinterp needs to be forced for some files from a target recipe as well?
Apr 30 12:36:19 or maybe I'm missing some part of this puzzle, because the binary shows the uninative loader path, but pointing to the directory as it was on the builder which created the binary, and the loader doesn't exist in that path on the builder where the sstate do_deploy is being reused
Apr 30 12:38:37 ah, it's BUILD_LDFLAGS which sets the uninative loader even when the recipe isn't native
Apr 30 12:41:32 nacknick: misunderstood, sorry. INHIBIT_PACKAGE_STRIP_FILES maybe?
Apr 30 12:48:34 nacknick: why do you not want to strip one specific binary? what does that achieve?
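(A sketch of the INHIBIT_PACKAGE_STRIP_FILES route rburton mentions above, for leaving just one binary unstripped; the binary name is hypothetical, and the ${PKGD}-prefixed path form is an assumption based on how other recipes such as valgrind use the variable:)

    # in the recipe: strip everything as usual except this one file
    # (assumption: entries are full paths under ${PKGD})
    INHIBIT_PACKAGE_STRIP_FILES = "${PKGD}${bindir}/my-debug-tool"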
Apr 30 12:49:34 rburton: over lunch I was also thinking about rolling builders doing something different
Apr 30 13:02:26 as a temporary workaround I'll explicitly use the host's loader again
Apr 30 13:02:46 qschulz: thanks, I will check that
Apr 30 13:03:19 JaMa: the assumption is that target recipes don't generate native binaries
Apr 30 13:03:30 rburton: I said *I want* to not strip one file inside a package, and not to do so for all binaries of the same package
Apr 30 13:04:03 RP: yes, in an ideal world they shouldn't :)
Apr 30 13:10:51 rburton: I've put a sentinel into master-next to try and find where this is coming from. My guess of opensuse doesn't appear right
Apr 30 13:21:05 I think we need markers in sstate objects so we can know the origin
Apr 30 13:48:13 Hi everyone. I have a patch for the meta layer, which I've sent to poky@lists.yoctoproject.org . I'm wondering now if it was the correct mailing list for this...
Apr 30 13:53:29 UVV: meta in poky is openembedded-core if I'm not mistaken
Apr 30 13:56:29 Alright, thank, I'll resend it there
Apr 30 13:56:34 Alright, thanks, I'll resend it there
Apr 30 14:00:47 Hi all, what is the "best way (TM)" to handle a yocto project for different architectures?
Apr 30 14:00:50 Our project is, in its simplest form, a common layer (let's call it meta-common) and two architecture/device specific layers (let's call them meta-x86 and meta-arm64).
Apr 30 14:00:59 What I'm doing right now is having two different build directories, one for each machine, with separate bblayers.conf files, including the common layer and the device-specific layer.
Apr 30 14:01:51 BoJonas: layers shouldn't be destructive. they should be able to live together just fine
Apr 30 14:01:57 if not, you need to fix the layers
Apr 30 14:03:03 But if I have the same recipe in meta-x86 and in meta-arm64, how can bitbake tell which one to use?
Apr 30 14:03:22 BoJonas: what is the recipe you're talking about?
Apr 30 14:03:53 It could be any recipe. A recipe for setting the hostname for instance
Apr 30 14:04:06 BoJonas: FYI, you can use VAR_<machine> or VAR_<arch> overrides, or do_<task>_append_<machine>
Apr 30 14:04:54 BoJonas: in some cases, you can merge both recipes and have only a few variables or tasks machine/arch specific (don't forget to set PACKAGE_ARCH to MACHINE_ARCH in that case)
Apr 30 14:04:55 Ok.. But then I need to make all variables in the recipes "machine specific", right?
Apr 30 14:05:09 Hmm..
Apr 30 14:05:31 @BoJonas: What is done differently to read the hostname for x86 and arm?
Apr 30 14:06:11 BoJonas: in some others, if the recipes are completely different but have the same "meaning", then virtual recipes/packages are what you're looking for. You can use COMPATIBLE_MACHINE to be sure to never pick the wrong one, and you can pick the provider with PREFERRED_PROVIDER and/or PREFERRED_VERSION
Apr 30 14:06:21 One target is named "my-cool-arm-host" and the other is named "this-is-a-x86-host" for instance
Apr 30 14:07:03 @BoJonas: OK, I see you want to give it different host names depending on what they are built for.
Apr 30 14:07:03 BoJonas: you could use a variable available from Yocto and put it in the hostname, or use a variable you would set in your machine.conf file
Apr 30 14:08:48 @BoJonas: as qschulz points out I would look at machine.conf. Certainly you will need 2 different BSP layers and 2 different machines to build for. By default the "BSP" name ends up being the hard-coded hostname.
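(A sketch of the per-machine hostname idea discussed above: the base-files recipe takes its hostname from the "hostname" variable, so a machine configuration file can override it per recipe; the machine file names here are hypothetical:)

    # conf/machine/my-arm64-machine.conf (hypothetical machine)
    hostname_pn-base-files = "my-cool-arm-host"

    # conf/machine/my-x86-machine.conf (hypothetical machine)
    hostname_pn-base-files = "this-is-a-x86-host"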
Apr 30 14:09:33 @RobertBerger yes, the two different targets are two completely different products, but are "kind-of" running the same application (with very different configs). That's why we currently have these machine-specific layers, and only include the one we need
Apr 30 14:09:53 but you should be able to build with both layers in bblayers.conf
Apr 30 14:10:01 Ok, I'll have a look at machine.conf
Apr 30 14:10:36 Can I make a layer machine-specific?
Apr 30 14:10:56 @BoJonas: That's what is called a BSP layer
Apr 30 14:11:49 Doh!! Now I understand! Of course
Apr 30 14:11:59 BoJonas: a BSP layer is a layer for all recipes required to boot the device: kernel, u-boot, machine configuration file. Maybe a few **very** specific recipes, but it should be very small IMO
Apr 30 14:13:05 then, if your recipe has a different configuration depending on the machine, that is something you could set directly from within the recipe by using overrides
Apr 30 14:13:34 Ok, I see
Apr 30 14:13:51 @BoJonas: here is a machine.conf file https://gitlab.com/meta-layers/meta-multi-v7-ml-bsp/-/blob/master/conf/machine/multi-v7-ml.conf
Apr 30 14:14:49 this is one for various boards ;)
Apr 30 14:14:56 armv7 based boards
Apr 30 14:14:56 Yes ok, I see that I need to move more of my configuration out of the recipes and into machine.conf instead
Apr 30 14:16:06 you need to understand the concepts of distro, machine and image
Apr 30 14:16:27 BoJonas: indeed, and make use of machine/architecture overrides in some cases where it's not possible to put it into the machine.conf
Apr 30 14:17:11 The same image can be built for different machines, e.g. core-image-minimal for your arm and x86 boards
Apr 30 14:17:17 @RobertBerger, I think you are right.. What we do now is that we have our "main-image.bb" in the meta-common layer and then have "main-image.bbappend" in the device-specific layers
Apr 30 14:17:19 BoJonas: as suggested by RobertBerger.. you can have a look at https://www.youtube.com/watch?v=o-8g0TPVVGg
Apr 30 14:17:40 BoJonas: yup, that seems wrong :)
Apr 30 14:17:55 @qschulz thanks, i'll give it a look
Apr 30 14:19:33 @BoJonas: But don't you fear: you would not be the first one who made such mistakes :) I saw this many times.
Apr 30 14:20:36 Everyone makes mistakes, everyone learns... for ever and ever :)
Apr 30 14:20:39 I have been using Yocto in different companies since the beginning of 2015, and everyone seems to have misunderstood the basic concepts and tries to "invent" their own way of doing things like this.
Apr 30 14:21:08 That's why I want to try to do it the right way this time
Apr 30 14:23:04 @BoJonas: Well, unfortunately there are many degrees of freedom with OE/YP.
Apr 30 14:24:10 @RobertBerger, yes i agree .. it's both good and bad
Apr 30 14:24:48 But I still think that Yocto is fantastic
Apr 30 14:27:03 Another thing .. can I build for multiple targets in a single bitbake command?
Apr 30 14:27:16 Sorry, not targets, machines
Apr 30 14:29:32 I mainly need this for fetching all the sources for our different devices. Right now we run a bitbake --runall=fetch for every target, but if I have all my layers included in my bblayers.conf, I guess that it will fetch for all machines, right?
Apr 30 14:30:09 ...
Hmm, thinking about it, no, I guess it would only fetch for the machine specified
Apr 30 14:45:51 BoJonas: the multiconfig feature is the only way to do that
Apr 30 14:46:19 RP: I am seeing error: 'FALSE' undeclared in the following two recipes https://errors.yoctoproject.org/Errors/Build/102058/ - before I delve into them, has something changed in oe-core/master-next?
Apr 30 15:02:43 @BoJonas: Look at multiconfig
Apr 30 15:03:37 @BoJonas: https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#dev-building-images-for-multiple-targets-using-multiple-configurations
Apr 30 15:03:43 RobertBerger: that's a bit overkill for only fetching packages :D
Apr 30 15:04:41 @qschulz - yes, of course just for fetching it's overkill
Apr 30 15:05:17 @BoJonas I would unify as much as possible and, as you said, it's just different configurations with the same app, so design it like this.
Apr 30 15:05:46 @BoJonas: like this you have the same sources and you could use a download cache for all of them
Apr 30 15:18:35 Thanks!!
Apr 30 15:33:06 khem: not that I know of. We are seeing corrupt sstate with SIGILL being triggered from native binaries all over though
Apr 30 15:36:02 any meta-rust users/devs here? Has anyone tried to pass something like ${@oe.utils.parallel_make_argument(d, '-Ccodegen-units=%d', limit=64)} to the rust-native bootstrap? It takes really long as shown by https://github.com/shr-project/test-oe-build-time so I'm trying to make it more parallel
Apr 30 15:40:14 JPEW: hashequiv is totally undermining my attempts to debug this SIGILL issue :)
Apr 30 15:48:19 New news from stackoverflow: why does autostart with systemd doesn't work
Apr 30 16:01:23 RP: Ya, I would expect it would
Apr 30 16:06:16 JaMa: it's building full llvm underneath so long times are expected
Apr 30 16:07:26 khem: isn't that in rust-llvm-native? I see most time spent in the single-threaded "rustc-1.37.0-src/build/x86_64-unknown-linux-gnu/stage0-tools-bin/fabricate generate"
Apr 30 16:11:14 it's rust-native.do_compile which takes 20 minutes, even more than chromium-x11.do_compile in some cases
Apr 30 16:15:45 yeah, fabricate is trouble to run in parallel
Apr 30 16:18:24 New news from stackoverflow: Why does autostart with systemd not work?
Apr 30 16:40:50 Am I the only one who sees NOTE: Retrying server connection (#8)...
Apr 30 16:40:51 ERROR: Unable to connect to bitbake server, or start one (server startup failures would be in bitbake-cookerdaemon.log) caused by "OSError: [Errno 98] Address 'hashserve.sock' is already in use" with dunfell?
Apr 30 16:45:47 O
Apr 30 16:45:59 er I'm seeing the retrying connection thing all the time now
Apr 30 16:46:03 haven't seen the hashserve message though
Apr 30 16:46:19 need to try to bisect
Apr 30 16:48:58 @kergoth - not good ;)
Apr 30 16:50:21 @kergoth: I CTRL+C, bitbakes are not killed, I kill them manually and I get this problem - need to manually remove hashserve.sock to bitbake again
Apr 30 16:51:10 @kergoth: currently running a test where I did CTRL+C and no kill to see if a new bitbake run will fix it
Apr 30 16:52:28 @kergoth: the hashserve message is in a log file: bitbake-cookerdaemon.log
Apr 30 16:53:11 @kergoth: bitbake does not fix it if I don't kill them either
Apr 30 16:54:23 @kergoth: I guess someone should clean up hashserve.sock by default?
Apr 30 16:57:28 Iirc it happens with bitbake in memres mode
Apr 30 17:01:41 @dl9pf not only. I don't use memres mode, at least I don't intend to ;)
Apr 30 17:02:03 @dl9pf is it on by default now?
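(A sketch of the multiconfig setup pointed to above, with hypothetical config, machine and image names; each multiconfig file is assumed to live under the build directory's conf/multiconfig/:)

    # conf/multiconfig/x86.conf (hypothetical)
    MACHINE = "my-x86-machine"

    # conf/multiconfig/arm64.conf (hypothetical)
    MACHINE = "my-arm64-machine"

    # conf/local.conf
    BBMULTICONFIG = "x86 arm64"

With that in place, something like `bitbake --runall=fetch mc:x86:main-image mc:arm64:main-image` should fetch the sources for both machines in a single invocation.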
Apr 30 17:02:21 No
Apr 30 17:04:03 kanavin_home: tracked this down and it is buildtools-tarball that is breaking things
Apr 30 17:07:52 RP: :( how come?
Apr 30 17:08:24 kanavin_home: I think the gcc in buildtools-tarball-extended has some kind of --with-arch=native default
Apr 30 17:08:57 RP: oh, so it's not about gomp? phew :)
Apr 30 17:09:10 kanavin_home: I just know if I have a clean setup and force a build of shared-mime-info-native, it only breaks when buildtools-tarball-extended is in play
Apr 30 17:09:34 kanavin_home: I'm not blaming gomp, this new toolchain does seem to have issues though
Apr 30 17:09:53 whether host cpu flags leaked into the buildtools, I don't know
Apr 30 17:10:10 some handle on where to look is progress I guess
Apr 30 17:18:35 New news from stackoverflow: Remove ROS from Yocto Bitbake to Reduce Image Size
Apr 30 18:03:01 sakoman: I'd hold off builds for now. I know what the issue is, just need time to fix it and upgrade buildtools on the AB
Apr 30 18:29:16 halstead: ping
Apr 30 18:31:39 halstead: why did List-Id: change on lists.yp.org on/around Apr 25?
Apr 30 18:33:53 good question, it broke half my filters :)
Apr 30 18:33:58 guessing the groups.io conversion?
Apr 30 18:41:06 denix, kergoth: April 25th is after the migration. I'll check the groups.io product update history and see if I can find a reason.
Apr 30 18:42:20 kergoth: yeah, same here, notably patchwork
Apr 30 19:08:45 RP: I figured, it's the json-c upgrade patch in oe-core which is causing this issue, please hold it off
Apr 30 19:13:29 halstead: my guess is the change correlates with ndec changing the reply behavior of the lists
Apr 30 19:14:39 smurray, I'll check if that is it. Lots of product changes went in on the 24th but none called out a list-id change.
Apr 30 19:18:50 khem: actually it's a patch from this khem guy ;-)
Apr 30 19:19:04 khem: http://git.yoctoproject.org/cgit.cgi/poky/commit/meta/recipes-devtools/gcc/gcc-target.inc?id=d566448b3d7b2fe3e9743795a2ef4bdc2b4d06a4
Apr 30 19:20:09 smurray, It appears the change was global to the platform, not a setting we altered.
Apr 30 19:20:39 halstead: ouch, that seems like a misfeature on the part of groups.io
Apr 30 19:20:59 At least it wasn't me ;)
Apr 30 19:21:03 heh
Apr 30 19:21:20 ndec: you are off the hook for now... :)
Apr 30 19:21:26 khem: we didn't realise gcc-target has implications for nativesdk-gcc
Apr 30 19:21:47 * smurray hopes filtering on ^Mailing-List is going to prove stable
Apr 30 19:22:39 smurray: how standard is that field?
Apr 30 19:24:07 denix: I'm unsure, tbh. I went back and checked and it's not present in older messages, so it might not be
our internal patchwork now misses patches
Apr 30 20:03:37 I thought List-ID was the stable one?
Apr 30 20:06:30 khem: I understand your comment btw and will hold off the json-c patch for now
Apr 30 20:07:24 khem: we were cross talking issues :)
Apr 30 20:08:39 RP: it is, but our List-Ids got changed/renamed last weekend on groups.io (see above) :( many filters got broken.
Apr 30 20:11:00 smurray: maybe you can use ^Mailing-List for procmail or other filters, but our ancient Patchwork seems to rely solely on List-Id :(
Apr 30 20:11:08 denix: hmm, I wonder why mine still work
Apr 30 20:11:58 RP: it is now like this - List-Id: <67612.openembedded-core.lists.openembedded.org>
Apr 30 20:12:28 RP: used to be List-Id:
Apr 30 20:12:30 i'm guessing exact match vs search, gmail's filters are often the latter, so it doesn't have to be exact
Apr 30 20:13:00 denix: I got lucky with where I put wildcards in my procmail files!
Apr 30 20:13:45 RP: yeah, I toyed with adding some wildcards; if Mailing-List proves problematic, I'll switch to doing that
Apr 30 20:15:21 RP: what issue did you run into with nativesdk?
Apr 30 20:15:23 I've pushed ABI bumps into master-next so it's rebuilding everything with the new buildtools on the appropriate workers
Apr 30 20:16:12 khem: if you use buildtools-tarball-extended and build with arch=native, it means the binaries won't run on other workers using uninative but hit SIGILL
Apr 30 20:17:28 khem: I was busy trying to find which crazy distro had enabled arch=native, then found it was our buildtools :)
Apr 30 20:31:36 Hey guys, I've been working with yocto for a bit now and was wondering something. My goal is to use yocto to build images for a product with a web server interface (node/react). What would be the recommended procedure for getting that project code on the system and up and running?
Apr 30 20:33:01 perhaps just a recipe with an empty do_compile() stage?
Apr 30 20:34:58 smrbz: that would work, there are other recipes which do this for config files and similar
Apr 30 20:36:31 denix, smurray, The list-id change has been rolled back. It was added in an attempt to assist with several e-mail providers' spam policies. After testing, the change wasn't needed.
Apr 30 20:37:00 @RP great, that was my thinking given that there's nothing really to compile and the software recipe would give me a more formal means of putting code in the right location
Apr 30 20:37:27 halstead: thanks for the heads up!
Apr 30 20:37:52 denix, If you changed your filters you will need to adjust back.
Apr 30 20:38:40 halstead: great, thanks!
Apr 30 20:50:27 RP: the extended tools tarball will use the gcc target runtime, right? I guess this change was prior to the extended tools tarball being a reality
Apr 30 20:59:01 RP: do we have a writeup on extended toolchain tarball generation? I am interested in creating one to build morty
Apr 30 20:59:18 looking for building morty on ubuntu 18.04
Apr 30 21:03:00 khem: I think the use case for the tarball is supporting modern yocto on ancient distros (by providing a modern native toolchain), not the other way around
Apr 30 21:03:30 khem: if you need to build ancient yocto on a modern distro, your best bet is a ubuntu 14.04 docker container which is very easy to install
Apr 30 21:09:00 I know that but it works both ways
Apr 30 21:09:21 khem: "bitbake buildtools-extended-tarball" :)
Apr 30 21:09:36 RP has done builds for sumo
Apr 30 21:09:45 kanavin_home: it does work both ways
Apr 30 21:09:46 RP: heh
Apr 30 21:10:42 khem: http://git.yoctoproject.org/cgit.cgi/poky-contrib/log/?h=rpurdie/pyro was my test of backporting to build an older tarball
Apr 30 21:10:52 khem: it's obviously out of date now
Apr 30 21:39:37 Hmm, tests failed in deployment :(
Apr 30 22:40:52 "As new processors are deployed in the marketplace, the behavior of this option will change."
Apr 30 22:51:36 RP: a case of extreme CYA?
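(A minimal sketch of the "recipe with an empty do_compile" idea smrbz and RP discuss above; all names and paths are hypothetical, and the prebuilt node/react bundle is assumed to sit next to the recipe in a dist/ directory:)

    # webapp_1.0.bb (hypothetical)
    SUMMARY = "Prebuilt web UI for the product"
    LICENSE = "CLOSED"
    SRC_URI = "file://dist/"
    S = "${WORKDIR}"

    # nothing to build, the bundle is already compiled
    do_compile() {
        :
    }

    do_install() {
        install -d ${D}${datadir}/webapp
        cp -R ${WORKDIR}/dist/* ${D}${datadir}/webapp/
    }

    FILES_${PN} += "${datadir}/webapp"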
Apr 30 22:52:27 denix: no, the behaviour of gcc now depends on the system you run it on :/
Apr 30 23:12:33 * RP gives up and will have to look at this more tomorrow
Apr 30 23:19:38 New news from stackoverflow: bitbake rpi-test-image with MACHINE=raspberrypi3-64 failing to build on the zeus release of yocto
May 01 00:02:44 So if I have an sstate cache, does the deploy step get skipped?
May 01 00:03:37 Or what step gets skipped and replaced with sstate?
May 01 00:59:40 rabbit9911: sstate saves up the task artifacts, evaluates via a signature whether something has changed, and does not execute the task if it already has the artifacts it would produce
May 01 01:41:25 I added a task after do_deploy that copies ${B}. But it does not get run unless it's a clean build
May 01 02:35:17 rabbit9911: B is almost empty when rebuilding and it's reused from sstate
**** ENDING LOGGING AT Fri May 01 02:59:57 2020
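(On rabbit9911's question above, a hedged sketch of one common fix: a separate task added after do_deploy is skipped whenever do_deploy is restored from sstate, so instead extend do_deploy itself so the extra files land in ${DEPLOYDIR} and are carried inside its sstate object; the paths are hypothetical:)

    # in the recipe or a bbappend (hypothetical paths)
    do_deploy_append() {
        install -d ${DEPLOYDIR}/extra
        cp -r ${B}/my-artifacts ${DEPLOYDIR}/extra/
    }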