**** BEGIN LOGGING AT Tue Jul 18 03:00:04 2017
Jul 18 05:05:39 hello, is there a way to build multiple kernels within one yocto build for one board?
Jul 18 06:33:00 morning all
Jul 18 08:45:07 How can I change the keyboard layout on my Yocto builds?
Jul 18 08:45:22 Post-install fix is ok
Jul 18 09:15:26 RP: looks like the m4 segfault is caused by the gold + patchelf combination: https://bugzilla.yoctoproject.org/show_bug.cgi?id=11785
Jul 18 09:15:27 Bug 11785: major, Medium+, 2.4 M3, eduard.bartosh, ACCEPTED , On Debian 9 m4 segfaults making impossible to build autoconf-native due to the uninative feature
Jul 18 09:16:05 RP: does it ring any bells for you?
Jul 18 09:25:57 ed2: doesn't ring a bell but that combination is something we need to debug then :/
Jul 18 09:31:19 RP: sure. I reproduced it, so now it's easier to do. Was just wondering if there were similar issues before.
Jul 18 09:48:06 ed2: We've had patchelf issues but not gold-specific ones
Jul 18 09:53:57 Hi, I'm trying to debug why I can't create a FIT image with an initramfs. I've specified INITRAMFS_IMAGE to point to my image; this gets built, but in kernel.bbclass the variable INITRAMFS_IMAGE_NAME is empty. My python isn't that great, but the following line
Jul 18 09:54:18 INITRAMFS_IMAGE_NAME ?= "${@['${INITRAMFS_IMAGE}-${MACHINE}', ''][d.getVar('INITRAMFS_IMAGE') == '']}"
Jul 18 09:54:28 should set it, no?
Jul 18 10:02:21 ignore the above, it was being set; my debugging statement was missing the {} when trying to output the variables. The error is "mv: cannot stat 'arch/arm/boot/fitImage': No such file or directory". I'll dig further into do_bundle_initramfs where the error is.
Jul 18 10:03:28 hello, what is the default login/pwd for the morty filesystem?
Jul 18 10:04:44 I just pulled in the yocto layer and I am not able to log in to the rootfs
Jul 18 10:06:21 So do_bundle_initramfs, after running the second pass at compiling the kernel, is expecting fitImage to be built, but it builds fitImage-${INITRAMFS_IMAGE}, so it fails at the very end of the task where it renames fitImage to fitImage.initramfs
Jul 18 10:06:53 Is this a problem with my kernel? I'm using linux-fslc 4.9
Jul 18 10:07:43 I tried fixing the mv command to mv -f ${KERNEL_OUTPUT_DIR}/$type-${INITRAMFS_IMAGE} ${KERNEL_OUTPUT_DIR}/$type.initramfs and it now fails on do_deploy, but it did get me past this error.
Jul 18 10:09:39 rburton: does SWAT need to do anything with all of those failures?
Jul 18 10:14:25 hi, does EXTRA_USERS_PARAMS work in the morty release?
Jul 18 10:14:36 joshuagl: nope
Jul 18 10:16:37 rburton: ack, I'll update the BuildLog
Jul 18 10:29:01 I commented out the following line from do_bundle_initramfs, "mv -f ${KERNEL_OUTPUT_DIR}/$type ${KERNEL_OUTPUT_DIR}/$type.initramfs", and it has built and deployed. I'll try it out and see if it works.
Jul 18 10:41:56 rburton: any chance to get this patchset merged? http://lists.openembedded.org/pipermail/openembedded-core/2017-June/138771.html
Jul 18 10:42:22 rburton: and this one? http://lists.openembedded.org/pipermail/openembedded-core/2017-July/139178.html
Jul 18 10:44:21 ed2: just wrangling the AB at the moment, they might be in ross/mut2 already. can you check?
Jul 18 10:46:57 rburton: nope, they're not there
Jul 18 10:48:40 rburton: there are 3 more patches pending, but they're more recent. The /boot patchset has been sitting under review since Jun 27 :(
Jul 18 11:00:30 joshuagl: i broke the AB again
Jul 18 11:04:50 rburton: care to be more specific?
Jul 18 11:05:11 e.g. which? how?
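[For context on the fitImage-plus-initramfs thread above: with the kernel-fitimage class in OE-core of this era, the usual configuration is roughly the sketch below. core-image-minimal-initramfs is just an example image name, and the comments on INITRAMFS_IMAGE_BUNDLE behaviour are an assumption about why the fitImage-${INITRAMFS_IMAGE} artifact appeared, not a verified diagnosis for linux-fslc 4.9.]

    # local.conf or machine configuration -- minimal sketch, unverified:
    KERNEL_IMAGETYPE = "fitImage"
    KERNEL_CLASSES = " kernel-fitimage "
    # image recipe providing the initramfs (example name)
    INITRAMFS_IMAGE = "core-image-minimal-initramfs"
    # "1" bundles the initramfs into the kernel image, which is what
    # do_bundle_initramfs expects to rename; when left unset, kernel-fitimage
    # instead emits a separate fitImage-${INITRAMFS_IMAGE} artifact, which
    # would match the mv failure seen above
    INITRAMFS_IMAGE_BUNDLE = "1"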
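[For reference on the EXTRA_USERS_PARAMS question: the usual usage goes through the extrausers class, as sketched below. Whether it behaves correctly on morty specifically isn't settled in this log; "mypassword" and "myuser" are illustrative.]

    # in an image recipe:
    inherit extrausers
    EXTRA_USERS_PARAMS = "\
        usermod -P mypassword root; \
        useradd -P mypassword myuser; \
        "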
Jul 18 11:05:44 pick a branch name that doesn't exist and the AB is good at not telling you
Jul 18 11:05:54 mostly a failing with tgrid and nothing you can solve
Jul 18 11:06:08 \o/
Jul 18 11:07:09 rburton: just sounds like you're being careless tbh ;-)
Jul 18 11:07:35 * joshuagl removes tongue from cheek
Jul 18 11:34:15 I'm getting "xlocale.h not found" errors when building libcxx from meta-clang. Any idea what could be causing this?
Jul 18 11:35:09 That's with meta-clang master.
Jul 18 11:35:42 new glibc
Jul 18 11:36:21 joshuagl: does nightly trigger all the builders at once, or in stages? i fired a new nightly and it's not kicked off qa-extras
Jul 18 11:38:23 rburton: so meta-clang is unusable with OE-core master at the moment?
Jul 18 11:38:45 Is someone working on that?
Jul 18 11:39:57 pohly: ask khem
Jul 18 11:41:40 joshuagl: ignore me, looking at the wrong builder
Jul 18 13:25:52 kanavin, hey
Jul 18 13:26:13 do you plan on actually submitting the patches you've done to the dnf stack upstream that you've marked as "upstream status: pending"?
Jul 18 13:26:24 because I haven't seen any of those hit the wire yet...
Jul 18 13:27:45 Is there a way to have multiple packages share the same fetched source? In other words, a way to optimize it so that all N recipes don't fetch it... just fetch it once and then unpack the portion that's required for each package?
Jul 18 13:29:05 tgoodwin: fetched files are cached in DL_DIR anyway so it generally will only fetch once. worst case is N packages all being built at the same time and all fetching in parallel i guess.
Jul 18 13:29:08 I don't think archives work that way
Jul 18 13:29:41 tgoodwin: easily solved by having a recipe that just has a SRC_URI and nothing else, and making all other recipes depend on it.
Jul 18 13:29:52 rburton: Okay, that's good to know. I also see the myriad of packages all hanging out in their unpack task.
Jul 18 13:30:02 unpack != fetch
Jul 18 13:30:10 I know, I said "also"
Jul 18 13:30:38 you *can* share an unpack tree but that's more work. gcc does it, for example, as that is both huge and the same tarball used many times.
Jul 18 13:30:52 i wonder if we can selectively unpack tarballs
Jul 18 13:31:35 I think you can if you know the specific file to unpack, sort of like the subpath of a git repo (that's my situation).
Jul 18 13:31:50 Specifying subpath for me at least doesn't seem to help expedite the unpack, however.
Jul 18 13:33:16 the fetcher doesn't support that by the look of it
Jul 18 13:39:25 rburton: Alright, thanks for confirming. My goal was really just trying to speed up the process by avoiding "unpack" repeatedly, without resorting to pulling every related package into the same recipe or something.
Jul 18 13:39:42 I'll check out gcc to see how they work it.
Jul 18 13:40:20 the tarball must be huge if unpack time is a problem!
Jul 18 13:41:06 shouldn't be that much work to extend the unpack logic to support an include/exclude glob
Jul 18 13:41:32 document it as a hint, and use tar --include and --exclude
Jul 18 13:45:22 rburton: the clone (git) is only 134 M, but I have a dozen or so packages being built from that same source.
Jul 18 13:47:35 khem: around? any idea whether core-image-sato-sdk works with musl on pyro?
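[A sketch of rburton's fetch-once suggestion above: a recipe that only fetches, which the other recipes depend on. Recipe name, URL, and the consumer dependency line are illustrative; DL_DIR already caches downloads, so this mainly avoids N parallel fetches of the same file.]

    # shared-source_1.0.bb -- hypothetical fetch-only recipe
    SUMMARY = "Fetches the shared tarball once so other recipes find it in DL_DIR"
    LICENSE = "MIT"  # placeholder; LIC_FILES_CHKSUM and the real license omitted here
    SRC_URI = "https://example.com/releases/big-source-1.0.tar.gz"
    # real sha256/md5 checksums are required for remote files

    # nothing to build in this recipe
    do_configure[noexec] = "1"
    do_compile[noexec] = "1"
    do_install[noexec] = "1"

    # consumers serialize their fetch behind this recipe, e.g.:
    #   do_fetch[depends] += "shared-source:do_fetch"
    # for git sources, the fetcher's subpath= option can limit what is
    # unpacked into ${WORKDIR}, though the full clone still happens:
    #   SRC_URI = "git://example.com/big.git;branch=master;subpath=component"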
Jul 18 13:52:45 joshuagl: I guess no
Jul 18 13:52:49 it works on master
Jul 18 13:53:35 joshuagl: there were testing fixes that went into master recently
Jul 18 13:54:07 khem: OK, thanks
Jul 18 13:54:15 joshuagl: for how long have you been wondering about replacing the job descriptions with something that can inspect the setup, so e.g. the musl run can look at layer versions itself to decide what to build
Jul 18 13:54:20 I guess a backport is possible
Jul 18 13:54:23 * joshuagl needs to implement an AB change to only build core-image-sato-sdk on master
Jul 18 13:54:45 Does anyone know how I can change the keyboard layout on my Yocto builds?
Jul 18 13:55:22 oh, handy. master already has a greater layerversion than pyro
Jul 18 13:55:56 rburton: many yaks on that path
Jul 18 13:56:13 joshuagl: the world needs a new build framework
Jul 18 14:03:29 it sure does!
Jul 18 14:03:42 rburton: RP: I could do with pushing a patch to the AB before your next build
Jul 18 14:04:01 joshuagl: ok, won't fire anything until you say it's clear
Jul 18 14:04:20 rburton: I assume we want the current builds to finish first?
Jul 18 14:04:52 please
Jul 18 14:05:44 * joshuagl steps away from the STOP button
Jul 18 14:23:06 ed2: do you have a theory as to why i don't see any problems with debian9/uninative?
Jul 18 14:24:06 rburton: do you use gold?
Jul 18 14:24:28 ed2: not knowingly :)
Jul 18 14:25:49 ed2: the issue is caused by patching a stripped binary linked by gold.
Jul 18 14:26:34 rburton: It should be easily reproducible with export PATH=/usr/lib/gold-ld/:$PATH && bitbake -c cleanall m4-native && bitbake m4-native
Jul 18 14:28:03 aaah
Jul 18 14:28:15 wasn't entirely paying attention to the comments, good work
Jul 18 14:30:33 rburton: can you remind me how to rebuild uninative with a changed patchelf? I was trying to patch m4 with native patchelf, but it didn't work out for some reason. It produced a non-executable binary.
Jul 18 14:30:45 hm
Jul 18 14:30:50 bitbake uninative-tarball, iirc
Jul 18 14:31:22 then you can set UNINATIVE_URL to point at it, see meta/conf/distro/include/yocto-uninative.inc
Jul 18 14:31:33 rburton: thank you. will try
Jul 18 14:35:18 joshuagl: also holding off
Jul 18 14:50:40 Is there an appropriate/correct way to patch a patch dynamically? Use case: I have a patch file that needs to get updated based on the user's PACKAGE_ARCH, so I set up a task to run after unpack and before patch to "patch" a patch file with the value of that variable. The task runs and the ${WORKDIR}/patchfile.patch is updated, but the patch that gets applied is the original.
Jul 18 14:53:11 tgoodwin: rather than modifying the patch I would suggest modifying the file that the patch patches instead
Jul 18 14:53:30 have the patch insert a placeholder if it helps
Jul 18 14:54:02 bluelightning: That's actually what the patch does, but across multiple files.
Jul 18 14:56:31 bluelightning: I was more just confused to see original copies of the patch file end up over in the ${S}/patches and ${S}/.pc directories after my patch task had already run. I don't see anything in the task listing that runs ahead that would have done this.
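[The uninative rebuild rburton describes a few lines up, spelled out; paths and the checksum value are illustrative, and the stock settings live in meta/conf/distro/include/yocto-uninative.inc as noted above.]

    # build a uninative tarball containing the modified patchelf
    $ bitbake uninative-tarball

    # then in local.conf, point the build at it; UNINATIVE_URL is a base URL
    # to which the tarball name (e.g. x86_64-nativesdk-libc.tar.bz2) is appended
    UNINATIVE_URL = "file:///path/to/build/tmp/deploy/sdk/"
    UNINATIVE_CHECKSUM[x86_64] = "<sha256sum of the rebuilt tarball>"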
Jul 18 14:56:57 tgoodwin: that's courtesy of quilt I believe, which is used by default to apply the patches
Jul 18 14:57:29 tgoodwin: I'd modify the original patch to fetch data from a file, or I'd use something like SRC_URI += "file://patch-blabla-${PACKAGE_ARCH}.patch"
Jul 18 14:58:48 aratiu: or alternatively, we do actually support machine/arch subdirectories for files by default, you just need to create subdirectories and put the desired alternate files in each one
Jul 18 14:58:59 (works via FILESOVERRIDES)
Jul 18 14:59:05 So it's not actually using the files that were unpacked into ${WORKDIR}
Jul 18 14:59:22 tgoodwin: oh, it does, but you're probably modifying them too late I would suspect
Jul 18 15:00:10 bluelightning: that's neat and sounds really useful, can you point me to a recipe which uses machine-based subdirectories?
Jul 18 15:00:12 I just stepped through the tasks one by one, mine happens before "patch" occurs and it does modify the ${WORKDIR} version.
Jul 18 15:00:42 But that patched version in ${WORKDIR} doesn't get copied to ${S}/patches or ${S}/.pc prior to actually patching.
Jul 18 15:01:10 The log.do_patch says "applying patch..." and then provides the path for my_recipe/files/thepatch.patch
Jul 18 15:01:22 aratiu: meta-yocto-bsp/recipes-bsp/formfactor (it's a bbappend, but it demonstrates how it works)
Jul 18 15:01:46 bluelightning: thank you!
Jul 18 15:06:01 bluelightning: I owe you a beer :)
Jul 18 15:12:38 https://patchwork.openembedded.org/patch/139086/ says "If you need the users, you add a dependency on the tools in the recipe and they'll be added."
Jul 18 15:13:02 can someone tell me what those tools are?
Jul 18 15:15:08 should I do inherit useradd in the recipe which tries to reference a user/group, not only the one which adds it?
Jul 18 15:18:19 bluelightning: can you please elaborate on the 'too late' comment?
Jul 18 15:22:23 tgoodwin: I'm making a guess - you'd have to look at meta/lib/oe/patch.py to see how it actually works
Jul 18 15:23:41 tgoodwin: bear in mind you are trying to do something that hasn't really been done as far as I'm aware, and thus you'll probably be fighting existing assumptions
Jul 18 15:32:34 bluelightning: fair enough. I've been going over that script, the run script, etc., which is why I've asked. To me, it looks like it's caching the patch files directly from whatever is in SRC_URI to some other temporary location before applying the patches. Those copied over during unpack aren't actually being used for patching, which goes against what I was expecting. Then again, I could just be misunderstanding it.
Jul 18 15:33:51 yes, do_patch copies the patches into the workdir
Jul 18 15:33:55 so you can't patch a patch
Jul 18 15:35:19 rburton: okay, so it copies them a second time?
Jul 18 15:35:24 (since the first time would have been from do_unpack)
Jul 18 15:35:57 do_unpack copies them from the layer into ${WORKDIR}
Jul 18 15:37:03 lol, and amusingly the quilt patcher then symlinks to the layer files for the actual patching :)
Jul 18 15:38:16 ha
Jul 18 15:38:36 rburton: thanks for confirming I'm not (totally) nuts.
Jul 18 15:38:45 that's almost certainly my fault, iirc it did that to ease the ability of the patch resolver to update the patches in the layer, or something, from back when the patch resolver was a thing
Jul 18 15:38:53 should probably fix that now
Jul 18 15:39:25 honestly all of oe.patch/patch.bbclass is just horribly ugly.. i hate looking at old code, particularly my own
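[The machine-subdirectory mechanism bluelightning points at above (cf. the formfactor bbappend in meta-yocto-bsp) looks roughly like this; the recipe layout and file names are illustrative, and qemuarm is just an example machine.]

    # myrecipe.bb names the file exactly once:
    SRC_URI += "file://tuning.patch"

    # layer layout; the most specific FILESOVERRIDES match wins, so the
    # qemuarm copy is picked up automatically when MACHINE = "qemuarm":
    #   recipes-example/myrecipe/files/tuning.patch
    #   recipes-example/myrecipe/files/qemuarm/tuning.patch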
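[And a sketch of the placeholder-plus-substitution route suggested above, which is roughly the workaround tgoodwin describes just below: the patch inserts a unique token, and a post-patch task swaps in the real value. The task name and the @TARGET_ARCH@ token are hypothetical.]

    # the patch applied in do_patch leaves @TARGET_ARCH@ in the sources;
    # substitute the real value once patching is done
    do_substitute_arch() {
        grep -rl '@TARGET_ARCH@' ${S} | \
            xargs -r sed -i 's/@TARGET_ARCH@/${PACKAGE_ARCH}/g'
    }
    addtask substitute_arch after do_patch before do_configure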
Jul 18 15:39:26 kergoth: i've been wanting to rip out the patching code, ever since i thought "let's just move ${S}/patches/ to ${WORKDIR}/patches" and then went insane
Jul 18 15:39:35 For now I've gotten around it by moving my patch-patcher to after do_patch and instead just doing a recursive sed to replace a unique symbol with the necessary variable.
Jul 18 15:39:46 rburton: that'd break the handling of patching with subdir=, though
Jul 18 15:39:56 admittedly i'm not sure anyone cares about that
Jul 18 15:40:06 but it's a thing, it will use multiple patches dirs and multiple series files, iirc
Jul 18 15:40:38 hm, i wonder if anyone uses that.
Jul 18 15:40:45 not a clue
Jul 18 15:40:49 never noticed the code change patchdir based on subdir
Jul 18 15:41:17 could be i'm remembering the details wrong, but it definitely supported having multiple patch roots, not just S
Jul 18 15:42:19 i just wanted to stop dropping implementation details into the source tree
Jul 18 15:42:41 yeah, i don't blame you there, adds complexity with externalsrc and whatnot
Jul 18 15:42:47 or can
Jul 18 16:08:01 joshuagl: erm, did i fire a new build before you managed to patch the AB? i think i might have
Jul 18 16:08:14 rburton: yes, it looks like you did
Jul 18 16:08:19 idiot
Jul 18 16:10:10 sorry
Jul 18 16:10:17 feel free to abort if you want to get the patch in
Jul 18 16:10:26 as punishment for my idiocy
Jul 18 16:23:49 rburton: I'm going for dinner now so will check back on things later and see how the build is doing
Jul 18 17:26:36 joshuagl, rburton: I really could use a test build sometime soon of the server changes :/
Jul 18 18:43:55 test
Jul 18 21:04:00 RP: apologies, didn't mean to hold things up
Jul 18 21:04:11 RP: go ahead, the changes can wait
Jul 18 21:23:58 Hi, does anyone have documentation on how to use dietsplash? Additionally, which other splash screen system works well with systemd? Thanks
**** ENDING LOGGING AT Wed Jul 19 03:00:03 2017