**** BEGIN LOGGING AT Thu Nov 12 02:59:56 2020
Nov 12 08:00:50 yo dudX
Nov 12 08:16:30 yo
Nov 12 09:59:11 qschulz: erbo: thanks for helping me out yesterday. I solved the issue. The `machine-id` file was created by https://github.com/openembedded/openembedded-core/blob/master/meta/recipes-core/systemd/systemd-systemctl/systemctl#L277 so I patched the file with patchdir=${WORKDIR}. Works like a charm :)
Nov 12 10:02:29 koty0f: \o/
Nov 12 12:06:07 I'm trying to understand how selftests are run: after a build, then a failure on the first run because of a missing package on the build host, it refuses to run again because build-st already exists (!?), and if I follow the hint given on IRC of using "-j1" I have the surprise of seeing a new build-st-$PID appearing and everything rebuilding from scratch...
Nov 12 12:06:34 isn't there some more reasonable solution, really?
Nov 12 12:10:01 Well, maybe the biggest problem comes from SSTATE_DIR being under TOPDIR - wouldn't it be reasonable to notify the poor soul between chair and keyboard that such an sstate will just be a problem?
Nov 12 12:49:50 yann: I guess there are some assumptions in the system about having a "sane" sstate setup by the time you run the selftests
Nov 12 12:50:12 yann: TOPDIR isn't wrong as such, it's just not optimal for selftest
Nov 12 12:50:35 previously we did just reuse the build directory, but that wasn't deterministic for the tests, so we can't win
Nov 12 12:53:36 @RP I am playing around with dunfell and your SPDX patch. I thought it checks per package, but I see things like this: glibc-2.31+gitAUTOINC+6fdf971c9d-r0 do_package: License for package glibc is {'GPL-2.0 WITH Linux-syscall-note'} vs GPLv2 & LGPLv2.1. GPL without the L comes from the glibc tests.
Nov 12 12:54:45 RobertBerger: it sounds like it's looking at the main license rather than the package license? That is possible, I've not looked for a while
Nov 12 12:56:58 @RP OK - I might have a look at it then, as time permits.
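(The SSTATE_DIR-under-TOPDIR problem discussed above is usually avoided by pointing the shared-state cache at an absolute path outside the build directory, so oe-selftest's `build-st*` trees can reuse it. A minimal local.conf sketch; the paths are illustrative, not from the log:)

```conf
# conf/local.conf -- example paths, pick any absolute directory outside TOPDIR
SSTATE_DIR = "/srv/yocto/sstate-cache"
# Sharing downloads the same way avoids re-fetching sources in build-st* trees:
DL_DIR = "/srv/yocto/downloads"
```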
In general our licensing tooling seems to analyze whatever is built and not e.g. what ends up in the target, meaning too many "false positives".
Nov 12 12:57:31 @RP So I guess what people usually do is post-processing of the info.
Nov 12 13:08:39 I think I have a fix for ipk package/feed signing which uses gpg-native, but I am not sure why it uses gpg-native instead of the gpg of my host's Linux ;)
Nov 12 13:08:43 RobertBerger: right, yes. This was purely an experiment to compare the data from the sources with data from the license field
Nov 12 13:09:27 RobertBerger: because we don't like dependencies on the host system? I know gpg is a nightmare in the context of the autobuilder and testing
Nov 12 13:10:53 @RP: SPDX: Sure, just looking into how it works and maybe to improve it. It's very interesting. But, as you know, only a few source files use SPDX at all ;)
Nov 12 13:11:23 RobertBerger: the kernel isn't "few" and it's massively increasing
Nov 12 13:12:11 @RP: SPDX: Yes I know, but it's GPLv2 ;) and one package
Nov 12 13:12:29 RP: "not optimal" is an understatement :D
Nov 12 13:13:50 yann: I agree it needs improvement and we would take patches which improve things without breaking our testing or getting in users' way
Nov 12 13:13:51 @RP: gpg: The latest version on Ubuntu 18 cannot deal with "parallel" invocation. It runs out of "secure" memory. gpg-native can deal with it.
Nov 12 13:14:09 RobertBerger: hence why we have gpg-native ;-)
Nov 12 13:14:30 @RP: Yes I know, but it's not used by default ;)
Nov 12 13:14:48 RobertBerger: hmm, it should be?
:/
Nov 12 13:14:56 @RP: only for rpms: http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/sign_rpm.bbclass#n69
Nov 12 13:15:26 @RP: I added the last few lines from above to http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/sign_ipk.bbclass#n69
Nov 12 13:15:32 and this seems to fix it ;)
Nov 12 13:15:43 RobertBerger: sounds like we need patches and new tests
Nov 12 13:15:57 ipk signing is broken in many ways.
Nov 12 13:16:12 RobertBerger: hence the need for tests!
Nov 12 13:16:28 It's tests and doc.
Nov 12 13:16:48 I've amused myself for a couple of days now to have something which, I think, works.
Nov 12 13:17:31 Let me get it in "some" state and I'll show it.
Nov 12 13:18:12 RobertBerger: sounds good, it's something we do want to be able to support
Nov 12 13:18:54 As far as I can say it's not too bad patch-wise: just those few lines added, but there are inconsistencies, e.g. in the defaults in the signing classes and opkg: opkg defaults to binary signatures and the classes to ASCII.
Nov 12 13:19:29 And also some bbappends for opkg and opkg-keyring
Nov 12 13:20:35 Let me test it and I'll write up something (including the gpg key stuff) and then we can see how to push this further.
Nov 12 13:21:49 @RP: I am pretty confident we can support it. I tried it with ipk and not rpms, but I think it's very similar there. rpms might be better supported.
Nov 12 13:22:37 RobertBerger: I'm fairly sure there are rpm tests
Nov 12 13:22:41 not sure about the situation with keys and rpms, but it should be relatively easy to check this as well.
Nov 12 13:23:35 One major issue I had was that the key on the target was not "trusted", and I added a new function to "opkg-key" to make it work.
Nov 12 13:23:49 RP: is it expected that between 2 runs sharing the same sstate, quite some build tasks need to be rerun?
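(The fix described above is to mirror in sign_ipk.bbclass what sign_rpm.bbclass already declares. The exact lines copied are not quoted in the log, so the following is only an assumption about what they might look like, based on the gpg-native and signing-keys dependencies mentioned in the discussion:)

```conf
# sign_ipk.bbclass addition (sketch; assumed, not quoted from the log --
# mirrors the tail of sign_rpm.bbclass so ipk signing also pulls in
# gpg-native and the deployed signing keys)
do_package_index[depends] += "signing-keys:do_deploy"
do_rootfs[depends] += "signing-keys:do_populate_sysroot"
PACKAGE_WRITE_DEPS += "gnupg-native"
```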
Nov 12 13:24:05 (oe-selftest runs)
Nov 12 13:24:43 yann: depends how you're sharing sstate
Nov 12 13:24:50 yann: also depends upon the tests
Nov 12 13:25:17 :/
Nov 12 13:26:31 yann: some tests have to build from scratch and are designed not to pull from or push back into sstate due to what is being tested
Nov 12 13:27:01 yann: sorry, is "runs" here selftest runs or normal builds?
Nov 12 13:27:40 I just adjusted SSTATE_DIR not to use TOPDIR but an absolute path, so most artifacts are indeed reused from the initial build, but my first run failed because sudo wants a password (and I'm not going to change that), so I ran sudo to unlock it, thinking it would just have to run the tests now - just not, apparently :|
Nov 12 13:27:53 selftest runs only
Nov 12 13:28:17 specifically, "oe-selftest -r runtime_test.SystemTap.test_crosstap_helloworld -j 1"
Nov 12 13:29:08 yann: the only sudo thing needed is for the tap/tun device. You can preload those in advance, then selftest won't need it
Nov 12 13:29:57 I'd expect that systemtap tests should build mostly from sstate, but it will probably include a kernel build since the stap target pieces won't be written to sstate
Nov 12 13:31:59 that could be reasonable, but util-linux, sqlite3, libxcb and such are quite unexpected here
Nov 12 13:32:28 systemtap with a valid SSTATE will not work, since there will be no kernel sources available to build against.
Nov 12 13:32:43 yann: are you using hashequiv?
Nov 12 13:32:58 no
Nov 12 13:33:28 yann: ok, that rules out that idea. I agree those things should be being reused and would like to understand why they're not too :/
Nov 12 13:35:36 at least the test passed
Nov 12 13:47:55 wow, it does indeed rebuild the kernel for each individual systemtap test, even in a single run :/
Nov 12 13:58:38 @yann: I am not sure it needs to rebuild the kernel every time, but it needs the kernel sources, to build a kernel module against them from the .stp script.
Nov 12 14:00:47 @yann So, I guess, we have a very special case with systemtap. If you were using SSTATE or an SSTATE mirror, the kernel sources would not be available and it could not build against them.
Nov 12 14:01:41 @yann not sure how this is handled in the tests. --no-setscene I guess, but then again, it builds a lot of stuff.
Nov 12 14:01:41 yann: hmm, that doesn't sound good :/
Nov 12 14:03:32 yann: there is definitely a ton of optimisation which could be done in oe-selftest
Nov 12 16:24:47 I don't really see how qemu's IP address gets passed to the tests themselves - esp. how "crosstap -r root@192.168.7.2 ..." gets to reach qemu (and the fact that it requires a 30-minute rebuild each time I attempt to launch this is quickly going to scare me away)
Nov 12 16:28:05 @yann: If you run qemu manually, those are the addresses it usually gets. Can you try to run qemu manually somehow and check whether those addresses 7.1 and 7.1 are valid?
Nov 12 16:28:17 7.1 and 7.2
Nov 12 16:34:28 RobertBerger: yes it does get that when running "runqemu", however that still blocks on sudo even with the tun and tap kmodules loaded
Nov 12 16:37:10 yann: shouldn't your user be in some group which can configure the tun/tap, or don't you need an entry in sudoers?
Nov 12 16:37:20 khem: can you pick 'freerdp: Add missing libxkbcommon WL dependency' for meta-openembedded please?
Nov 12 16:40:03 is it on the ml? if not, send it
Nov 12 16:40:04 marex: requiring a group would seem reasonable, but the script really seems to like sudo :|
Nov 12 16:40:17 khem: it is
Nov 12 16:40:23 there is runqemu-gen-tapdevs too, but that's quite intrusive on the system, too
Nov 12 16:40:30 yann: have you run the runqemu-gen-tapdevs command? that uid:gid should be the user/group that will actually be running qemu/testimage
Nov 12 16:40:39 ah...
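(The tap-device pre-creation discussed above is a one-time root operation; after it, runqemu and testimage no longer need sudo. A sketch of the setup, assuming a `netdev` group and a user `builder` -- both names illustrative; the exact argument order of runqemu-gen-tapdevs varies between releases, so check its usage message first:)

```shell
# One-time setup as root; "netdev" and "builder" are example names.
groupadd -f netdev
usermod -aG netdev builder
# Pre-create tap devices owned by that uid:gid (argument order per your
# release's usage output; a native sysroot with tunctl may also be needed):
sudo ./scripts/runqemu-gen-tapdevs $(id -u builder) $(getent group netdev | cut -d: -f3) 8
```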
Nov 12 16:41:02 khem: https://patchwork.openembedded.org/patch/175616/
Nov 12 16:41:17 I've been working on a systemd solution to have tap devices created at startup, but it hasn't been smooth so far
Nov 12 16:41:23 that one seems really suitable for a container that will run tests, but not for a workstation
Nov 12 16:41:23 * moto-timo not a networking expert
Nov 12 16:42:12 yann: I have always used runqemu-gen-tapdevs and I create (if it doesn't exist) a netdev group
Nov 12 16:42:23 so my user and the CI system are both members of netdev
Nov 12 16:42:55 but I am still in search of a cleaner solution with systemd
Nov 12 16:44:07 IIRC I was using something for qemu networking that did not have such root perms requirements, in a previous life (and time flies, I'll have to re-dig from scratch)
Nov 12 16:44:21 slirp, but that is user space and slower
Nov 12 16:44:36 I won't intentionally use any tool that is slower
Nov 12 16:45:24 marex: it's already in master, did you check?
Nov 12 16:45:35 also, I won't intentionally run things that require root perms... you get only the perms you need
Nov 12 16:46:00 running containers as --privileged. NO.
Nov 12 16:46:56 no, that was not slirp. Things come gradually back into main memory as I browse the docs
Nov 12 16:48:45 khem: d'oh, I was still rebasing on dunfell... so I guess I should send another patch for dunfell too
Nov 12 16:48:54 khem: thanks for the heads up
Nov 12 16:50:24 I was using the socket backend. The use-case was different, essentially inter-VM communications.
I had a specific qemu process (with the processor not running, kinda kludgy) acting solely as a network switch (with the equivalent of "-netdev socket,id=mynet0,listen=:1234" passed by the controlling process to create ports on the switch, dynamically), and the VMs connecting with "-netdev socket,id=mynet0,connect=:1234"
Nov 12 16:51:05 yann: interesting
Nov 12 16:51:20 and I probably had a special port on that virtual switch that was bridged to a permanent tap device
Nov 12 16:51:38 that would make sense...
Nov 12 16:51:51 that way I only had a single-shot config as root, and everything else unprivileged
Nov 12 16:52:01 there are many things in runqemu and oe-selftest that made sense at the time but could use some updates...
Nov 12 16:52:41 or "there I fixed it" solutions where it worked and nobody ever went back to re-factor it
Nov 12 16:53:03 plus teams have changed... so continuity of devs is a factor
Nov 12 16:53:12 sure :)
Nov 12 16:53:51 and the tap solution is the one advertised everywhere, so usually people take the road with signs :)
Nov 12 16:54:01 indeed
Nov 12 16:55:34 to be honest, I frequently take the command that 'runqemu' outputs and tweak it manually, but the simpler starting point is still welcome :)
Nov 12 16:59:14 the tap solution works out really well in the automated testing, which helps a bit too
Nov 12 18:17:56 khem: do you want me to submit it or can you pick it for 3.2 and 3.1?
Nov 12 18:19:46 I just realized I'm procrastinating by watching a TED talk on procrastination.
Nov 12 18:21:59 I can only hope it's the Sam Battle one
Nov 12 18:24:50 it is now
Nov 12 18:24:52 :)
Nov 12 20:10:16 kergoth: is it the one by Tim Urban? it's so funny
Nov 12 20:10:30 khem: yeah. instant gratification monkey <3
Nov 12 20:10:44 his blog post on the subject is good too.
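(The root-free socket-backend setup yann described earlier -- one qemu acting as a switch, guests connecting to it -- can be sketched roughly as below. Ports, machine options, and image names are illustrative, and in practice the controlling process would add one listening socket per switch port dynamically rather than a single static one:)

```shell
# "Switch" qemu: CPU halted (-S), no display, just forwarding frames.
qemu-system-x86_64 -S -display none \
    -netdev socket,id=port0,listen=:1234 \
    -device virtio-net-pci,netdev=port0 &

# Guest VM: connects to the switch instead of requiring a root-created tap.
qemu-system-x86_64 -kernel bzImage -drive file=rootfs.ext4,format=raw \
    -netdev socket,id=mynet0,connect=:1234 \
    -device virtio-net-pci,netdev=mynet0
```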
actually a lot of his posts are on Wait But Why
Nov 12 20:11:52 yeah, but I think procrastination is also helpful, because it does not end the thread but puts it to sleep, so the subconscious is still processing it :)
Nov 12 20:13:01 true, I think it depends on whether it's preventing you from taking action at all, or if it's a stepping away from it for a time
Nov 12 20:13:18 multithreaded concurrent systems are efficient
Nov 12 20:13:34 only problem is concurrency should not be left to everybody
Nov 12 20:14:19 sometimes it's helping you to take action that you are not yet ready for
Nov 12 20:15:23 which is to 'defer'
Nov 12 21:33:23 how does the file-rdeps QA check work with virtual recipes? a recipe we have is failing the file-rdeps QA check, but we do have a provider for the required library in a virtual DEPENDS
Nov 12 21:54:05 I have a (hopefully) easy question: I found online "bitbake --interactive", which doesn't seem to exist in my setup. Was this feature moved/removed? I'd like, for example, to be able to run "bitbake -f -c compile package_a" followed by "bitbake package_b" without reloading all the recipes every time
Nov 12 22:51:10 it looks to me like the file-rdeps QA check ignores PROVIDES and PREFERRED_PROVIDER. is this intentional or a bug?
Nov 12 23:27:55 Answer to my question was "bitbake --server-only -T -1" in case anyone curious reads the scrollback
**** ENDING LOGGING AT Fri Nov 13 02:59:57 2020