**** BEGIN LOGGING AT Mon Apr 08 02:59:57 2019
Apr 08 03:08:41 ermmm whys my thud image installing both python2 and python3 ???
Apr 08 06:01:35 The image.bbclass defines do_fetch[noexec] = '1'
Apr 08 06:02:19 Is there a way to re-enable fetch in my recipe (so SRC_URI = "file://foo" copies foo to ${WORKDIR})?
Apr 08 06:02:44 Or do I need to re-invent fetch for my recipe?
Apr 08 06:04:40 (I.e. reuse base_do_fetch() in python my_fetch () { base_do_fetch() })?
Apr 08 06:13:26 or better base_do_unpack()?
Apr 08 06:56:18 good morning
Apr 08 07:10:13 mckoan: depends on your definition
Apr 08 07:20:14 RP: I already use a "files" directory, and was hoping to use a single layer for multiple poky releases (here we have patches to connman, which have to be changed to apply to new versions)
Apr 08 07:21:53 I'm also looking for an idiom where I get an early failure if I forget to adjust one change for a new poky release. The %.bbappend catch-all combined with a necessary per-version patch dir just sounded like a perfect fit
Apr 08 07:29:07 Hi, folks!
Apr 08 07:29:09 LetoThe2nd: c'mon be positive :-D
Apr 08 07:30:45 I am searching for a standard way to solve my problem. I have one tricky recipe from a BSP layer that uses multiple repositories. So, I can patch only the main one with Yocto's standard way.
Apr 08 07:31:33 I think it would be great to create separate recipes for those subprojects
Apr 08 07:31:59 but the problem is - I need access to their sources from the main one
Apr 08 07:33:30 So, those recipes I want to copy should only do the following job: fetch, patch, populate their sources somewhere
Apr 08 07:34:30 Sure I can do it manually, but I am just wondering if there is a standard way (or just a less dull one)
Apr 08 07:39:05 seems like I've already found the solution
Apr 08 07:39:06 patchdir - Specifies the directory in which the patch should be applied. The default is ${S}.
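The patchdir flag just quoted can be used directly from SRC_URI; a minimal sketch (patch name and destination directory are hypothetical):

```bitbake
# Hypothetical: apply a patch to a secondary repository that was
# unpacked outside the main source tree ${S}.
SRC_URI += "file://fix-subproject.patch;patchdir=${WORKDIR}/subproject"
```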
Apr 08 07:39:45 It's a lot easier than I thought
Apr 08 07:41:19 LetoThe2nd, needs some heavy metal
Apr 08 07:43:03 mckoan, just scream it at LetoThe2nd
Apr 08 07:57:16 armpit: ++
Apr 08 07:57:39 mckoan: actually, i sent some potential customer your way last week
Apr 08 08:18:51 hi RP - did not notice you were away
Apr 08 08:18:57 RP: I already use a "files" directory, and was hoping to use a single layer for multiple poky releases (here we have patches to connman, which have to be changed to apply to new versions)
Apr 08 08:19:02 I'm also looking for an idiom where I get an early failure if I forget to adjust one change for a new poky release. The %.bbappend catch-all combined with a necessary per-version patch dir just sounded like a perfect fit
Apr 08 08:20:20 yann: You have a piece which needs immediate expansion (THISDIR) and a piece which does not (PV)
Apr 08 08:21:39 yann: try something like MYAPPENDDIR := "${THISDIR}" then FILESEXTRAPATHS_append = "${MYAPPENDDIR}/${PV}"
Apr 08 08:21:50 the spacing and so on needs tweaking there
Apr 08 08:22:17 hm, not far from one of my attempts, but I had missed the immediate-expansion part
Apr 08 08:27:26 RP: that does the job well enough, thanks much!
Apr 08 08:28:45 RP, i have some sumo-nmut failures that I won't be able to look at until I get home on Thursday
Apr 08 08:29:38 thud-nmut looks good and is out for review. I'll send a pull request tomorrow?
Apr 08 08:30:55 * armpit what day is today??
Apr 08 08:31:59 armpit: Monday
Apr 08 08:32:17 armpit: I saw the thud email, patches looked good thanks
Apr 08 08:34:54 * armpit going to be 34C here today..
Apr 08 08:38:56 armpit: couldn't stand that :)
Apr 08 08:40:01 i can last an hour of that then I melt
Apr 08 08:40:47 Maybe somebody had a similar issue
Apr 08 08:41:09 how to use / copy SRC_URI's file:// to WORKDIR when one uses inherit image.bbclass ?
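RP's suggested idiom separates the immediately-expanded piece from the deferred one; written out (MYAPPENDDIR comes from the suggestion above, with the spacing and separator adjusted as RP noted; the variable is normally spelled FILESEXTRAPATHS):

```bitbake
# Capture the bbappend's own directory at parse time (:=) ...
MYAPPENDDIR := "${THISDIR}"
# ... but let ${PV} expand late, so one catch-all %.bbappend picks
# up a per-version patch directory, and a missing directory for a
# new poky release fails early.
FILESEXTRAPATHS_append = ":${MYAPPENDDIR}/${PV}"
```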
Apr 08 08:41:33 lukma: As I mentioned, it should be enough to delete the noexec flag using delVarFlag
Apr 08 08:41:47 The python () { d.delVarFlag("do_fetch", "noexec") } is not enough
Apr 08 08:41:52 lukma: something odd is going on if that isn't enough
Apr 08 08:42:49 I need to create a separate task and put there:
Apr 08 08:42:49 src_uri = (d.getVar('SRC_URI') or "").split()
Apr 08 08:42:49 fetcher = bb.fetch2.Fetch(src_uri, d)
Apr 08 08:42:49 fetcher.unpack(d.getVar('WORKDIR'))
Apr 08 08:43:32 lukma: $ bitbake core-image-sato -e | grep python.do_fetch -C 5
Apr 08 08:43:41 shows python do_fetch () { bb.build.exec_func('base_do_fetch', d) }
Apr 08 08:44:01 which implies the function still exists
Apr 08 08:44:51 recipes-core/images/foorec.bb', lineno: 16, function: do_get_foo
Apr 08 08:44:51 0012:# image.bbclass disabled calls do_fetch[noexec] = 1,
Apr 08 08:44:51 0013:# which disables fetcher - so no files are provided via
Apr 08 08:44:51 0014:# SRC_URI
Apr 08 08:44:51 0015:python do_get_foo () {
Apr 08 08:44:51 *** 0016: base_do_fetch()
Apr 08 08:44:51 0017: base_do_unpack()
Apr 08 08:44:52 0018:}
Apr 08 08:44:52 0019:
Apr 08 08:44:53 0020:addtask do_get_foo before do_image_foo
Apr 08 08:44:53 Exception: NameError: name 'base_do_fetch' is not defined
Apr 08 08:45:56 With the -e flag
Apr 08 08:45:57 python do_fetch () {
Apr 08 08:45:57 bb.build.exec_func('base_do_fetch', d)
Apr 08 08:45:57 }
Apr 08 08:46:05 It is also there ...
Apr 08 08:47:00 you can't call it like that.
Please use pastebin for things which are multiple lines
Apr 08 08:50:59 RP: https://pastebin.com/VasgFWRZ
Apr 08 08:51:08 Now it shall be more concise
Apr 08 08:52:35 lukma: bb.build.exec_func('base_do_fetch', d) is not the same as base_do_fetch()
Apr 08 08:52:53 lukma: and I still don't see why you would need to add a new task
Apr 08 08:56:35 RP: I've double checked - I only do have python do_fetch () { bb.build.exec_func('base_do_fetch', d) }
Apr 08 08:57:38 lukma: can you share the code you're trying which sets SRC_URI and deletes the noexec fetch/unpack var flag?
Apr 08 08:58:09 lukma: and clearly say what happens or doesn't happen. It's very hard for me to guess what you're doing or what the problem is
Apr 08 09:02:37 RP: https://pastebin.com/0G6Rjnuh
Apr 08 09:04:11 lukma: well, that won't even parse. You want the "." between d and delVarFlag
Apr 08 09:04:33 lukma: you also want a second line under it which is the same but change fetch for unpack
Apr 08 09:05:56 RP: With the dot - correct, it is in the recipe
Apr 08 09:09:06 lukma: what about the second line?
Apr 08 09:10:08 lukma: you need both, i.e. python () { d.delVarFlag("do_fetch", "noexec") d.delVarFlag("do_unpack", "noexec") }
Apr 08 09:14:55 RP: https://pastebin.com/B9tcyNBB
Apr 08 09:15:11 RP: It seems like the do_fetch / do_unpack functions are missing
Apr 08 09:15:24 or not inherited properly
Apr 08 09:16:10 lukma: they're clearly not missing as the -e output shows them. The question is why they're not executing
Apr 08 09:16:53 lukma: try "-e | grep do_fetch", it will have more output but perhaps more clues.
I'm interested in the tasks output
Apr 08 09:16:57 RP: https://stackoverflow.com/questions/51952495/yocto-image-recipe-and-src-uri
Apr 08 09:17:17 The question is about core-image, but image.bbclass supersedes it
Apr 08 09:17:36 lukma: right, same issue, the noexec
Apr 08 09:24:36 RP: So with -e the python base_do_fetch () { is visible and correct
Apr 08 09:25:32 What is the difference between running a qemu image with runqemu vs running a wic.vmdk image with qemu manually? The former works fine, but the latter hangs midway in the kernel boot process
Apr 08 09:26:38 My ultimate goal is to be able to run the yocto image from a VM (in the Azure cloud)
Apr 08 09:31:34 sveinse: compare the commandlines?
Apr 08 09:32:17 lukma: that wasn't what I asked. I'm interested in the long line starting # "{'tasks':
Apr 08 09:32:46 lukma: I suspect you have some other layer/class involved which is doing something like a deltask do_fetch
Apr 08 09:32:57 lukma: but I simply can't tell
Apr 08 09:33:31 RP: https://pastebin.com/iLGwShxv
Apr 08 09:34:27 RP: Yep. Not completely comparable though. One is using the .ext4 image and "injecting" kernel and parameters from the qemu command line, while the other is self-contained with a syslinux boot
Apr 08 09:38:01 I plan on getting it from 'runqemu' to standalone qemu to VirtualBox to Hyper-V to Cloud.
Apr 08 09:39:52 sveinse: my guess is the rnd initialization
Apr 08 09:40:39 sveinse: at least it was for me when using an old script for running qemu builds and vbox
Apr 08 09:41:55 lukma: right, so it is running do_fetch but not finding the file?
Apr 08 09:42:06 sveinse: and with VirtualBox 6 there is also an issue with the serial port, syslinux gets stuck, you either need to disable the syslinux menu or redirect serial from vbox e.g. to a pipe and confirm the menu selection there
Apr 08 09:42:23 lukma: is the file /recipes-core/images/image-foo/foo.its ?
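The fix RP keeps pointing at in this thread - image.bbclass marks both do_fetch and do_unpack as noexec, and both flags must be cleared - amounts to a recipe fragment like this (a sketch following the foo.its example in the log, not lukma's actual pastebin):

```bitbake
# In an image recipe: image.bbclass sets do_fetch[noexec] = "1"
# and do_unpack[noexec] = "1". Clearing both flags lets files in
# SRC_URI be fetched and unpacked into ${WORKDIR} again.
SRC_URI = "file://foo.its"

python () {
    d.delVarFlag("do_fetch", "noexec")
    d.delVarFlag("do_unpack", "noexec")
}
```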
Apr 08 09:43:54 JaMa: syslinux seems to pass fine, yet it complains both in qemu (using vmdk) and vb about an undefined video mode and halts there for 30 secs, but it doesn't get stuck.
Apr 08 09:44:25 using uvesafb?
Apr 08 09:44:55 RP: Yes, the file is there
Apr 08 09:45:07 JaMa: Don't know. Just using the out of box config from rocko, so I probably need to configure it
Apr 08 09:46:13 lukma: ok, so what does the unpack log say?
Apr 08 09:47:24 lukma: I wonder if do_rootfs[cleandirs] is cleaning up the file before you can see it
Apr 08 09:47:26 JaMa: further, one of the last messages I see in the console is "random: crng init done", but I often get 2-3 other lines after that. On vb it's "hid-generic: input: USB HID v1.10 Mouse", on qemu it's "e1000: enp0s3: renamed from eth0", so I'm guessing I'm not seeing the real lockup cause in either of them
Apr 08 09:47:36 lukma: try bitbake image-foo -c unpack -f
Apr 08 09:50:01 RP: Hmm....
Apr 08 09:50:11 It is placed in WORKDIR now .....
Apr 08 09:50:35 RP: Let me double check it with the sstate cache removed
Apr 08 09:51:01 lukma: I suspect the rootfs code is cleaning it up before it starts
Apr 08 09:52:30 RP: I suppose that the task do_rootfs () is necessary to use IMAGE_CMD ...
Apr 08 09:54:40 lukma: addtask do_stashfile before do_rootfs after do_unpack ?
Apr 08 09:54:49 lukma: put the file somewhere safe?
Apr 08 09:55:52 Where and how do I control the boot parameters and kernel options in a wic.vmdk image?
Apr 08 09:56:17 (or any other image that embeds the bootloader and kernel options)
Apr 08 09:56:19 RP: It shall be safe in WORKDIR?
(as I just use it with IMAGE_CMD_foo mkimage)
Apr 08 09:56:50 lukma: probably
Apr 08 09:58:51 RP: Ok, it seems like the problem was with not re-enabling both d.delVarFlag("do_fetch", "noexec") and d.delVarFlag("do_unpack", "noexec")
Apr 08 09:59:21 RP: As fetch just "downloads" the stuff and "unpack" places it in ${S} or ${WORKDIR}
Apr 08 10:01:02 correct
Apr 08 10:01:25 RP: Thanks for the explanation and help :)
Apr 08 10:01:37 RP: BIG thanks - sounds better :)
Apr 08 10:09:00 New news from stackoverflow: Yocto Image Recipe and SRC_URI
Apr 08 10:12:37 RP: yes, thanks for pushing the hang issue further
Apr 08 10:14:52 kanavin: No problem. I tried to patch out the hanging test but it still showed timeouts on the autobuilder last night so I think there may be further tests that hang
Apr 08 10:15:07 kanavin: my local builds are a mess so rebuilding atm
Apr 08 10:15:32 kanavin: Your work triggered me to run the bisect whilst watching TV :)
Apr 08 10:15:49 RP: right, I did not check if the rest of the tests are okay
Apr 08 10:16:18 bisecting poky is time-consuming
Apr 08 10:16:38 kanavin: I naively thought I could just disable that one. Regardless, the ptest results from the last build are way healthier, the util-linux fixes help too for example (assuming we can fix the other build issue)
Apr 08 10:17:38 RP: I also wonder if the 5 minute timeout set by ptest-runner is genuinely too small, even with perfect buffering
Apr 08 10:18:04 some of the python3 tests take two and a half minutes to complete, too close for comfort imo
Apr 08 10:18:13 kanavin: I think it should be fine, most tests have some output if they sit for that long
Apr 08 10:18:28 kanavin: hmm :/
Apr 08 10:18:44 since ross's output reduction patch
Apr 08 10:19:13 It seems to be linked to how the HDD is enabled in qemu: "-drive if=none,id=hd,file=myimage.wic.vmdk,format=raw -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=hd" works, while simply using "-drive file=myimage.wic,format=raw" fails to boot.
That is, it seems to boot the kernel properly but fails to jump to userspace boot AFAICS
Apr 08 10:19:52 RP: the timeout is easy to adjust via the command line from ptest.py though, if needed
Apr 08 10:20:04 (copy paste typo in the former, no ".vmdk" in it)
Apr 08 10:20:17 kanavin: what worries me is some kind of breakage which then has each ptest hang for 15 mins :/
Apr 08 10:20:27 kanavin: a 5 minute timeout is bad enough to lock builds up
Apr 08 10:21:29 RP: right, a per-test ability to override the timeout would be good to have. Most tests can have it set to something really minimal like 30 seconds
Apr 08 10:21:45 kanavin: agreed, that would be better
Apr 08 11:16:49 kanavin: I think you may be right, I think it is timing out on the AB as you say
Apr 08 11:16:57 kanavin: it's also not getting pass/fail processing correctly
Apr 08 11:29:27 rburton: any idea why run-ptest for python3 wouldn't parse any of the results? :/
Apr 08 11:29:35 * RP is wondering about escaping
Apr 08 11:30:23 ah, busybox sed versus non-busybox?
Apr 08 11:31:33 kanavin: any idea how difficult a per recipe timeout would be? :/
Apr 08 11:39:21 New news from stackoverflow: How to run a shell script while compiling a recipe in Yocto
Apr 08 11:40:55 zeddii: I'm trying to figure out the pieces we have left for this release before we can build rc1. qemumips shutdown is one, would we want to sort the tiny kernel and drop 4.18 too?
Apr 08 11:44:08 right. I have the config change to merge for mips (I’ll do that this morning here). We could do the -tiny bump, I can do some builds and qemu boot tests, but I’m not aware of what sort of other testing is commonly done with tiny (I’m betting not all that much).
Apr 08 11:44:26 I can do those -tiny build tests today as well, if we want to at least give it a go.
Apr 08 11:44:30 zeddii: we build and boot it iirc
Apr 08 11:44:41 I can do that much.
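RP's earlier stash suggestion for the image-recipe fetch problem - copy the unpacked file somewhere the rootfs code's [cleandirs] won't wipe it - might look roughly like this (the task body and stash path are assumptions; foo.its follows the example in the log):

```bitbake
# Hypothetical: preserve the unpacked file before do_rootfs runs,
# since the rootfs code's [cleandirs] can remove it from ${WORKDIR}.
do_stashfile () {
    mkdir -p ${WORKDIR}/stash
    cp ${WORKDIR}/foo.its ${WORKDIR}/stash/
}
addtask do_stashfile before do_rootfs after do_unpack
```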
Apr 08 11:44:52 zeddii: not much other than that, and I would kind of prefer to get to two kernels to test/release rather than three
Apr 08 11:44:54 * zeddii denied another trip this week, so I’m around.
Apr 08 11:45:00 agreed.
Apr 08 11:45:17 leave it with me today, and I’ll get you a patch for your morning tomorrow (or at least an email describing what broke).
Apr 08 11:45:27 zeddii: I'm also trying to get ptest beaten into some sort of shape for release
Apr 08 11:45:32 zeddii: thanks
Apr 08 11:45:58 no replies on -netdev to your query that I saw.
Apr 08 11:46:12 * RP is going to mistype unbuffered in a commit message soon...
Apr 08 11:46:21 zeddii: no :(
Apr 08 11:46:38 zeddii: did I ask the right things and have the right info? I don't now the networking people really...
Apr 08 11:46:43 know
Apr 08 11:51:51 the question looked fine to me. I don’t know many of the core devs either in that space. I’ll see if I can wrangle up a known name to jiggle them into looking at it.
Apr 08 11:52:05 zeddii: thanks :)
Apr 08 12:07:14 rburton: I think http://git.yoctoproject.org/cgit.cgi/poky/commit/?id=9dec37ceef6c1db79e33484cf983629c6601f802 is bust for python3
Apr 08 12:12:01 specifically the sed filtering or the lack of -v addition of -W for py3
Apr 08 12:46:38 RP: what happens?
Apr 08 12:46:59 hm
Apr 08 12:47:14 i wonder if the -W means it splits to stderr/stdout and the sed messed up the order
Apr 08 12:47:23 maybe we should 2>&1 before the sed?
Apr 08 12:47:42 *or* try again to write a results subclass that writes the messages we're after
Apr 08 12:47:55 was wondering if we should unify on something more common like TAP output
Apr 08 13:00:58 I'm having trouble understanding some interactions (in morty): I have a base autotools-based recipe (pulseaudio) making use of do_install_append().
In a .bbappend I can add more using do_install_append() myself, but if additionally I add, say, do_install_arm64_append(), the autotools_do_install call does not make it into the final script, only those 3 _append snippets make it there
Apr 08 13:01:43 my understanding problem is probably linked to not seeing where the autotools_do_install call itself comes from
Apr 08 13:02:03 yann: do_install_append_arm64
Apr 08 13:02:07 order matters
Apr 08 13:02:30 kanavin: did you touch openssl ptest?
Apr 08 13:02:31 WARNING: core-image-minimal-1.0-r0 do_image_qa: /usr/lib/openssl/ptest/util/opensslwrap.sh is a broken link
Apr 08 13:06:48 rburton: d'oh, thx - looks like I once again can't wrap my head around this order
Apr 08 13:07:02 append and remove go before other overrides
Apr 08 13:10:30 I keep imagining func_append_foo overloads func_append, like VAR_foo overloads VAR
Apr 08 13:10:51 shadows, rather
Apr 08 13:12:31 rburton, I didn't
Apr 08 13:12:37 not lately at least
Apr 08 13:13:16 RP, I'm thinking about it, first thing is where we would actually specify a custom timeout that ptest-runner can read
Apr 08 13:13:36 probably a ptest-timeout file next to run-ptest, with a single number in it
Apr 08 13:14:30 then it shouldn't be very hard to read that from C and re-assign the timeout variable, just before running the test
Apr 08 14:21:57 RP: if you haven't yet merged the py ptest thing, just adding @unittest.skip("broken on newer kernels") would be a neater and more pythonic patch
Apr 08 14:27:37 rburton: is there a hardware list for new gear ?
Apr 08 14:28:00 literally no idea what you mean
Apr 08 14:28:27 rburton: meaning for vendors with yocto capable hardware
Apr 08 14:28:59 OutBackDingo: like a list to advertise your products, or what?
Apr 08 14:29:06 yeah
Apr 08 14:29:17 LetoThe2nd: right
Apr 08 14:29:37 or at least get the word out
Apr 08 14:30:27 OutBackDingo: send $$$ to the yocto project, become a platinum sponsor, and then convince all the other platinum sponsors that it's a good idea to use the project as advertising. :-P
Apr 08 14:31:01 OutBackDingo: until that happens, the only thing is to become yocto project compliant as far as i know
Apr 08 14:31:05 hahahhaha
Apr 08 14:31:26 well we have thud running on some new pi clone SOM boards
Apr 08 14:31:30 having the layer be yp compliant means it goes on a list on the web site
Apr 08 14:31:48 OutBackDingo: everybody has something running on something.
Apr 08 14:31:53 true
Apr 08 14:31:57 see :)
Apr 08 14:32:12 i know... mama always told me i was nothing special :)
Apr 08 14:32:52 it's the other way round. we are all special!
Apr 08 14:33:13 no, seriously. do the compliancy dance, it's the best value for money you can probably get.
Apr 08 15:00:04 Hi, booting from the QuadSPI flash of an i.MX8 eval board requires building a bootable image which I must flash at address 0x0 of the flash. Such an image consists of certain parameters as a header plus the actual program, which is u-boot. How does yocto support building such an image? I've just seen that u-boot is bitbaked, but it's not that bootable image.
Apr 08 15:00:58 fbre: if the u-boot build process can support such an image, then you'll probably have to adjust it for your specific hardware.
Apr 08 15:01:13 just like generating an spl for some boards
Apr 08 15:03:15 LetoThe2nd: What does the abbreviation "spl" mean?
Apr 08 15:07:35 secondary program loader
Apr 08 15:08:59 ah OK
Apr 08 15:09:17 I thought secondary poot loader :)
Apr 08 15:19:23 RP: I just poked around a bit, but didn’t see the poky-tiny test build configuration. I’m building core-image-minimal with my distro set to poky-tiny, but wanted to build the same image types as the AB.
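The override-ordering rule rburton gave yann earlier (append and remove go before other overrides) means the machine/arch override must come after _append; a sketch of the two spellings (the install step is just an example):

```bitbake
# Wrong: do_install_arm64_append appends to a function named
# "do_install_arm64", which under the arm64 override *replaces*
# do_install entirely - autotools_do_install never gets called.
#do_install_arm64_append () { ... }

# Right: append to do_install, conditional on the arm64 override:
do_install_append_arm64 () {
    # arm64-specific install steps (hypothetical example)
    install -d ${D}${sysconfdir}
}
```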
Apr 08 15:26:30 I'm seeing some variance between builds when using runqemu, where it works one time and not in the next build. The out-of-box kernel of qemux64 depends on the system drive being a scsi device. And I see that sometimes runqemu sets up the proper scsi virtio, other times it does not (and it fails to start). Any ideas on how to approach that inconsistency?
Apr 08 15:27:41 Drop using runqemu and make my own where I control the qemu options?
Apr 08 15:30:28 Does Yocto have any generic x64 intel MACHINE, or is qemux64 precisely that?
Apr 08 15:31:18 genericx86, perhaps?
Apr 08 15:31:25 though that might not be x64, thinking about it
Apr 08 15:31:28 * kergoth needs more coffee
Apr 08 15:33:36 sveinse: genericx86-64 is that, but the meta-intel intel-corei7-64 is better
Apr 08 15:33:44 especially if you're actually targetting intel
Apr 08 15:34:35 rburton: yeah, I'm going to run the image in a containerized/VM environment, so something modern is best :)
Apr 08 15:34:45 meta-intel is what you want then
Apr 08 15:35:46 rburton: thanks, I'll try that. Hopefully I have better luck with other hypervisors then
Apr 08 15:35:59 well, technically linux-yocto is ahead of meta-intel ;)
Apr 08 15:36:13 pah ;)
Apr 08 15:36:42 sveinse: qemu* assumes it's running in qemu
Apr 08 15:37:32 rburton: yup, figured as much. Actually it works fine in VirtualBox. But not in Hyper-V, due to not having IDE storage in the kernel
Apr 08 15:37:51 yeah it's pretty tuned for what qemu offers for HW
Apr 08 15:38:40 Although not quite straightforward in all combinations of IMAGE_FSTYPES in qemu either, as it varies in how it configures the storage device apparently
Apr 08 15:53:53 RP: thanks for the tip, I put MULTILIBS="" in local.conf and it made it
Apr 08 15:55:25 however I have another problem, looks like the building of my recipe is stuck on "do_package_qa", it is a quite simple recipe but it has been running almost 1h... how to check what it is doing ?
Apr 08 16:02:42 looks like it is waiting for some input on /home/voyo/IOX/opencgx-x86-generic-64-4.14-2.4/iox/tmp/work/core2-64-poky-linux/iox-cisco/1.0-r0/packages-split/iox-cisco/package/temp/fifo.54088
Apr 08 16:02:54 anyone have an idea what it is about ?
Apr 08 16:05:10 voyo: what is process 54088 ?
Apr 08 16:06:17 sveinse: no such process
Apr 08 16:06:22 (anymore, I suppose)
Apr 08 16:10:19 New news from stackoverflow: Odroid XU4 with Yocto and GUI (gtk+3 error wayland-egl not found)
Apr 08 16:33:45 When using MACHINE=intel-corei7-64 my image does not parse. It throws a "ERROR: Task do_bootimg in */my-image.bb depends on non-existent task do_image_ext4 in */my-image.bb", yet building core-image-minimal works. I've grepped the sources and I am not finding any clues to why. Any ideas guys?
Apr 08 16:36:01 That is, I'm not using either do_bootimg or do_image_ext4 in my image recipe
Apr 08 16:37:18 does your image set an image type?
Apr 08 16:40:46 rburton: ah. yes IMAGE_FSTYPES="wic.vmdk tar.xz". When I added ext4 to the list, then it proceeds. Apparently metal-intel requires ext4 as a target
Apr 08 16:40:58 *meta-intel :D
Apr 08 16:57:43 rburton: right now python3 shows no test success, no fail and no skipped
Apr 08 16:57:53 rburton: i.e. zero counts everywhere
Apr 08 16:57:59 urgh
Apr 08 17:08:10 RP: do you recall if qemuarm (qemuarma15) for 5.0 booted in nographic mode ?
Apr 08 17:08:31 I’m testing tiny and seeing a hang. i just started a poky build, but it’ll take a while
Apr 08 18:38:51 Have there been any efforts in Yocto/OE towards building images for containerized deployment, e.g. docker? I believe the images would be somewhat different. E.g. no need for any kernel or systemd, yet some low-level system things must be deployed, such as libc.
Apr 08 18:40:19 Yes
Apr 08 18:41:27 https://www.youtube.com/watch?v=OSyLoHYxGLQ&feature=youtu.be
Apr 08 18:41:40 https://elinux.org/images/6/62/Building-Container-Images-with-OpenEmbedded-and-the-Yocto-Project-Scott-Murray-Konsulko-Group-1.pdf
Apr 08 18:42:57 Crofton: cool, thanks
Apr 08 18:48:59 sveinse: there's a container image type that rips out kernels and stuff as a generic starter
Apr 08 18:49:39 sveinse: integration per-service is more specific, patches welcome. also try meta-virtualisation?
Apr 08 18:50:06 bottom, OpenEmbedded, it isn't just for embedded anymore :)
Apr 08 18:51:45 rburton: Yes. We have two setups, one where we are running a docker server on embedded hardware, so meta-virtualization is used there. The other, which I'm experimenting with, is to build and run images for this purpose.
Apr 08 18:53:11 At the same time, we want to be able to deploy the same SW into cloud computing. Since we have the ecosystem up and running with OE, it would be absolutely best if images could be built for the cloud as well. It saves us from needing to make a build system for it. And OE seems very promising
Apr 08 18:54:09 Yes, people already do this
Apr 08 18:54:10 I'm building the image for intel-corei7-64 now to see if that is a good generic platform for that. Takes a while on my sluggish machine :D
Apr 08 18:55:27 Not yet sure if I need a VM or a container, so I'm testing both approaches
Apr 08 18:58:32 Not related to this specific task, but management has asked me to assess if building our OE images is feasible using cloud services as opposed to purchasing and maintaining local metal. Then I will be placing yocto inside a docker container.
Apr 08 18:59:20 So everything everywhere is cloud and containers these days Apr 08 19:09:50 is there an easy way to set the "-native" and "-nativesdk" variations of variables to the "base" version in a python function, I'm trying to use a single assignment that calls a function to set all three without copy-pasting the assignments...the env dump shows the override, but uses the default value Apr 08 19:25:55 I'm having the same problems booting any .wic or .wic.vmdk images with runqemu, even under the intel-corei7-64 machine. It does work using the ext4 image. I am able to start the .wic.vmdk image in virtualbox only if I enable EFI. Yet I cannot see any secureboot or efi packages in the manifest. Apr 08 19:32:13 guys, I need to have a very simple receipe, for my own simple &small image, with only static files (nothing to build/compile) and maybe untar some archives inside rootfs. what receipe to use as best example ? is there any doc with description ? Apr 08 19:40:00 zeddii: I don't think I've tested that, only in graphic mode Apr 08 19:40:38 zeddii: FWIW these are the tiny tests we run, I think its only qemux86: https://autobuilder.yoctoproject.org/typhoon/#/builders/15/builds/711 Apr 08 19:46:05 sveinse: grub-efi is the bootloader Apr 08 19:46:09 or, systemd-boot Apr 08 19:47:37 RP: ok, cool. I can confirm that qemux86 works. qemuarm has an issue. I'll poke at arm a bit, but prepare a patch regardless. Apr 08 19:49:15 zeddii: sounds good! I'd like to sort qemuarm for tiny and can add testing for that but that can be 2.8 Apr 08 19:49:41 zeddii: is non-tiny nographic also having issues? Apr 08 19:49:49 nope. it worked. Apr 08 19:50:01 rburton: interestingly, manifest lists neither (-- this is on rocko) Apr 08 19:50:03 poky -> qemuarm -> 5.0 -> nographic was fine. Apr 08 19:50:06 zeddii: I wonder if its related to that edid stuff Apr 08 19:50:43 either that or a kernel config. I'll try a couple of things and see if I can find a smoking gun. 
but will definitely send a patch later tonight, so you'll have it in your morning.
Apr 08 19:51:29 zeddii: thanks. Things are starting to come together for 2.7 rc1 :)
Apr 08 19:51:39 apart from the string of AB build failures :/
Apr 08 19:51:39 I'm more hampered by not being able to build a kernel-less image with IMAGE_FSTYPES="container" and that bitbake faults at parsing since there is no "ext4" target
Apr 08 19:52:05 sveinse: welcome to a quirk: the image manifest doesn't show stuff that isn't in an image, and the boot partition within a wic isn't an image
Apr 08 19:52:33 rburton: ah, right, makes kinda sense. Thanks.
Apr 08 19:53:38 rburton: is there a downside if I swap py3 back to -v instead of -W for run-ptest?
Apr 08 19:58:00 In meta-intel/documentation/secureboot/README it states that IMAGE_FEATURES += " secureboot" should be used for that. Sorry if this is a stupid question, but is secureboot something else than UEFI? Because my image requires EFI set in the hypervisor apparently, but I have not configured secureboot.
Apr 08 20:01:57 sveinse: secureboot is kind of a "sub function" of UEFI, you can have UEFI but not enable secureboot. secureboot itself will allow you to boot only signed binaries.
Apr 08 20:03:14 voyo: thanks
Apr 08 20:06:09 np. however I have no idea how it is implemented within OE, and how to provide certs to check the signature of your binary.
Apr 08 20:07:19 voyo: no worries. My concern is more that it seems my VM (Azure) provider does not have support for gpt or uefi or SCSI (need IDE) :(
Apr 08 20:08:33 very simple booting options only then...
Apr 08 20:09:27 voyo: yet, browsing through meta-intel, it doesn't seem to support non-gpt and non-uefi without manual tweaking
Apr 08 20:10:03 Understandable. GPT and uefi have been with us for a while. Only ancient hardware doesn't support them
Apr 08 20:11:26 So it's Azure Cloud which is the surprise here
Apr 08 20:16:03 well. for simple images, virtualization stuff, etc.
- personally - I wouldn't need and would rather not want such complex things like uefi, it's just overkill if you simply need to boot a small image. For embedded devices you will have to use u-boot or similar stuff anyway...
Apr 08 20:17:29 by assumption you are working in a very limited and controlled environment, so why worry about uefi and make life even more miserable ;)
Apr 08 20:18:19 voyo: Well, I don't really want it. At least not for the VM part. But it seems integrated into the image
Apr 08 20:19:27 seebs: around?
Apr 08 20:20:35 you mean UEFI is built into your image, even though you didn't ask for it ? I see.. I'm just starting with OE/yocto, have only small experience with images built by somebody else, using a quite old version, with just old grub on board.
Apr 08 20:20:39 seebs: I have some data on our mysterious owner corruption...
Apr 08 20:20:47 seebs: I don't understand it though!
Apr 08 20:24:14 voyo: correct. I pull in meta-intel, set MACHINE="intel-corei7-64" and then I get uefi when I build a wic based image
Apr 08 20:25:09 sveinse: what if you choose another MACHINE, maybe something like generic_x64 makes a difference ?
Apr 08 20:26:21 btw, I need to have a very simple recipe, for my own simple & small image, with only static files (nothing to build/compile) and maybe untar some archives inside the rootfs. what recipe to use as the best example ? is there any doc with a description ?
Apr 08 20:28:41 voyo: hmm on rocko it says: "MACHINE=generic_x64 is invalid."
Apr 08 20:29:11 RP: semi-around, what's up?
Apr 08 20:29:45 sveinse: can't you just throw an ext4 at the cloud provider thingy?
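The "container" image type rburton mentioned earlier is the generic starter for sveinse's docker/cloud goal; a minimal sketch of an image recipe using it (the linux-dummy kernel provider line is an assumption for fully kernel-less builds, and as noted above sveinse hit a parsing snag with this on his setup):

```bitbake
# Hypothetical image recipe fragment for a docker-importable rootfs.
# The "container" type produces a rootfs tarball without kernel or
# bootloader concerns.
IMAGE_FSTYPES = "container"
# Assumption: avoid building a real kernel at all by providing a
# stub virtual/kernel:
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
```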
Apr 08 20:30:04 seebs: otavio has a reproducer for our weird pseudo owner corruption and some logs
Apr 08 20:30:26 seebs: I was hoping you could glance at them and see if it's anything obvious to you
Apr 08 20:30:47 seebs: A cutdown version: https://paste.debian.net/1076734/
Apr 08 20:31:07 seebs: basically that file should be owned by 0 but becomes owned by 1000
Apr 08 20:31:38 seebs: the "symlink mismatch" being the thing which seems to be a smoking gun
Apr 08 20:31:44 sveinse: genericx86-64
Apr 08 20:32:02 rburton: no, not when running a full VM. Even the wic is an intermediate format, as it must be converted to the appropriate hypervisor format
Apr 08 20:32:11 seebs: I can give a link to the full logs and bitbake output but I think that is perhaps the most obvious filtered version
Apr 08 20:33:17 seebs: I guess it's saying the inode number in the database and the inode number on disk are different, hence the "inode mismatch"
Apr 08 20:35:31 seebs: even then the later 1000 owner which suddenly appears makes no sense
Apr 08 20:35:51 huh
Apr 08 20:36:32 this is interesting.
Apr 08 20:36:48 What is the difference between using genericx86-64 vs intel-corei7-64? Is it only cpu options and uefi/gpt booting?
Apr 08 20:36:54 there's no "linking [...] for" messages. but every call to pdb_link_file should be preceded by one, I think?
Apr 08 20:37:04 pseudo_debug(PDBGF_DB, "linking %s for %s\n",
Apr 08 20:37:04 msg->pathlen ? msg->path : "no path",
Apr 08 20:37:04 pseudo_op_name(msg->op));
Apr 08 20:37:04 pdb_link_file(msg);
Apr 08 20:37:36 seebs: This was a filtered log, not sure if that got stripped out
Apr 08 20:49:14 sveinse: did you see this? - https://wiki.yoctoproject.org/wiki/How_do_I#Q:_How_do_I_build_a_wic_image.3F
Apr 08 20:50:14 voyo: yeah sure. Can't build a bootable image without wic.
Apr 08 20:54:28 it shouldn't have, i think, since the path should have matched. i think.
Apr 08 20:54:36 and the unfiltered log seems not to have them either, which is weird.
Apr 08 20:54:56 oh, i see. those lines are under PDBGF_DB, but only PDBGF_FILE is in use.
Apr 08 20:55:12 and they probably shouldn't be restricted like that, but i'm honestly not sure.
Apr 08 21:05:06 sveinse: .wic is just a disk image format, it's not a special format. vmdk etc is just optimised for vm use.
Apr 08 21:05:27 sveinse: compiler tune and kernel (generic vs intel)
Apr 08 21:06:11 rburton: yes, thanks. I know what wic is
Apr 08 21:06:42 rburton: is it correct that uefi and gpt cannot be turned off with an option in meta-intel?
Apr 08 21:09:46 it's the wic image that is controlling that iirc
Apr 08 21:10:56 which is controllable, just set WKS_FILE
Apr 08 21:39:20 * sveinse is starting to feel like a bitcoin miner with the amount of heat generated from compiling for 3 new machines, each 10k OE tasks from scratch
Apr 08 23:11:37 New news from stackoverflow: Why is my kernel configuration option not set in resulting defconfig after running bitbake -c savedefconfig virtual/kernel?
Apr 09 02:12:07 New news from stackoverflow: Yocto Recipe Not Installing File to Image
Apr 09 02:45:22 RP: is there a good reason why you used "eval sh -c ..." in run-postinsts (besides that you wrote that in 2007) instead of something like $(sh -c ...)?
Apr 09 02:45:40 RP: believe it or not, 12 years later that's causing me a headache
**** ENDING LOGGING AT Tue Apr 09 03:00:03 2019
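For the final run-postinsts question: the practical difference between the two forms is that eval re-parses and executes its string in the current shell, while $( ... ) is command substitution - it runs a command in a subshell and captures its output. An illustrative snippet (not the actual run-postinsts code; variable names are made up):

```shell
#!/bin/sh
# eval executes the string in the current shell, so side effects
# like variable assignments persist:
cmd='greeting=hello'
eval "$cmd"
echo "$greeting"            # prints: hello

# $( ... ) runs the command in a subshell and substitutes its
# output; nothing it does affects the current shell:
out=$(sh -c 'echo hello')
echo "$out"                 # prints: hello
```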