**** BEGIN LOGGING AT Mon Nov 04 02:59:58 2019
Nov 04 07:33:32 Morning
Nov 04 07:59:48 Morning!
Nov 04 10:47:51 novaldex: Seems the issue tracker is back :)
Nov 04 11:04:31 Tofe: kinda late pong
Nov 04 11:31:58 JaMa: hi
Nov 04 11:32:48 Could you revert the tags in meta-wop for 055 and replace them with 057? I remember last week you said you'd do that, but I didn't see it. I'm not 100% sure how to do that
Nov 04 11:33:01 I can create a new one, but I'm not sure how to delete the old one
Nov 04 11:34:04 I've also been trying to build Chromium 68 and 72 in our image, but no luck
Nov 04 11:34:31 Tried the various patches that were in your shr branch of OSE, but not much luck
Nov 04 11:53:35 Herrie: I've updated the release tags in webos-ports-setup already; I wasn't planning to change 055 (these are created by jenkins jobs, or at least should be)
Nov 04 11:54:07 Herrie: I've updated the webruntime changes in the shr-project fork, but there is still 1 failure with the glibc in torque
Nov 04 11:54:29 I've pinged the LGE guys to review my changes and integrate them
Nov 04 11:56:36 JaMa: thnx, is this for 68 or 72 or both?
Nov 04 11:59:39 both; 68 builds fine, 72 fails with torque
Nov 04 12:00:03 68 also failed at my end on Friday
Nov 04 12:01:00 what was the error?
Nov 04 12:01:11 bshah: no problem :) I'm still working on my 5.2-based hammerhead kernel, and I was wondering if you currently still have sound working on the N5
Nov 04 12:01:28 bshah: because if I understood correctly, you now use masneyb's kernel, right?
Nov 04 12:01:37 JaMa: Let me try with your latest bits
Nov 04 12:01:43 Just to make sure it wasn't solved
Nov 04 12:03:23 bshah: on my side I've made some (tiny) progress, fixing little stuff here and there, but still no sound. I have the pcm sockets, though. (it's here: https://github.com/Tofee/linux-mainline-msm/tree/v5.2-hammerhead)
Nov 04 12:04:22 Herrie: I was testing it with OSE with zeus, so your error in LuneOS with zeus might be different
Nov 04 12:06:45 JaMa: I'm just re-running to make sure I have the latest
Nov 04 12:08:26 What error are you getting on 72? Just interesting to know
Nov 04 12:13:55 JaMa: Seems to start with: https://paste.ubuntu.com/p/SNp7wttbVn/
Nov 04 12:14:42 This is with 68
Nov 04 12:18:18 Retrying 72 now as well
Nov 04 12:18:23 this is probably caused by the stdlib on the host, let me re-read my notes
Nov 04 12:22:41 This is on Ubuntu 18.04 at my end
Nov 04 12:30:29 Herrie: "./torque: /usr/lib/i386-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by ./torque)"
Nov 04 12:30:36 is the remaining issue with 72
Nov 04 12:31:32 your 68 issue is probably caused by missing libstdc++-8-dev and lib32stdc++-8-dev on your host (or whichever other version clang autodetected)
Nov 04 12:32:08 if you run clang you can see "Selected GCC installation: /usr/lib/gcc/x86_64-linux-gnu/8" somewhere (or some other version)
Nov 04 12:32:20 and for that one you need the corresponding -dev packages installed
Nov 04 12:32:44 "Command 'clang' not found, but can be installed with: sudo apt install clang"
Nov 04 12:32:59 Herrie: clang is built as part of webruntime
Nov 04 12:33:09 git/src/third_party/llvm-build/Release+Asserts/bin/clang++
Nov 04 12:33:15 Ah OK
Nov 04 12:33:39 So how do I run this clang then?
Nov 04 12:33:46 Or should it be somewhere in my do_compile log?
Nov 04 12:34:46 just run the command line from your error log with -E and -v added (after sourcing the temp/run.do_compile environment)
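[Editor's note: a sketch of the check JaMa describes above. The recipe name "webruntime" and the clang++ path come from the log itself; the exact invocation and the devshell shortcut are assumptions, not verified commands from the discussion:

    # open a shell with the same environment do_compile uses
    # (an alternative to sourcing temp/run.do_compile by hand)
    bitbake -c devshell webruntime

    # from the recipe's WORKDIR, ask the bundled clang++ which host GCC it
    # auto-selected; -v prints a line like
    # "Selected GCC installation: /usr/lib/gcc/x86_64-linux-gnu/8"
    git/src/third_party/llvm-build/Release+Asserts/bin/clang++ \
        -v -E -x c++ /dev/null 2>&1 | grep 'Selected GCC'

    # then make sure the matching host -dev packages exist, e.g. for GCC 8:
    sudo apt install libstdc++-8-dev lib32stdc++-8-dev
]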
Nov 04 12:35:31 JaMa: Not sure what you mean by that, to be honest
Nov 04 12:36:26 show me your "dpkg -S /usr/lib/gcc/x86_64-linux-gnu/8"
Nov 04 12:37:30 "gcc-8-base:amd64: /usr/lib/gcc/x86_64-linux-gnu/8"
Nov 04 12:37:50 what other versions do you have there?
Nov 04 12:38:27 7, 7.4.0 and 8 it seems
Nov 04 12:38:38 At least that's what's in my /usr/lib/gcc/x86_64-linux-gnu/ folder
Nov 04 12:38:57 and "dpkg -l lib\*stdc+\*"
Nov 04 12:39:29 dpkg -S /usr/lib/gcc/x86_64-linux-gnu/7
Nov 04 12:39:37 https://bpaste.net/show/GJNP6
Nov 04 12:40:37 it's a bit shortened, but it looks like the correct -dev packages are installed, hmm :/
Nov 04 12:41:04 hmm right, try to build for qemux86
Nov 04 12:41:22 I wasn't testing it with qemux86-64, as OSE doesn't support that
Nov 04 12:44:40 Ah OK, will try qemux86 instead then
Nov 04 12:46:38 On 72 for qemux86-64 I get: https://bpaste.net/show/K2XNA
Nov 04 12:47:44 Need to build a bunch of other things for qemux86 first before it starts with webruntime
Nov 04 12:50:26 I've started a qemux86-64 build as well to see if I can reproduce it
Nov 04 12:50:54 but I wouldn't be surprised if there are hardcoded qemux86 MACHINE strings in webruntime itself
Nov 04 12:52:04 JaMa: In the LGE-added bits you mean?
Nov 04 12:52:16 yes
Nov 04 12:52:31 I would expect that the original Chromium is pretty OK with different archs
Nov 04 12:52:41 e.g. wam still hardcodes the list of supported MACHINEs
Nov 04 12:53:08 Herrie: yes, upstream doesn't have this issue, as qtwebengine shows
Nov 04 12:53:24 that's why I'm a bit concerned about switching from qtwebengine to webruntime from LG
Nov 04 12:53:43 qemuarm fails in OSE as well because of hardcoded MACHINEs
Nov 04 12:54:03 Ah OK
Nov 04 12:54:10 Weird they limit it that way
Nov 04 12:54:14 a review with a fix has been ignored for a year, because LG doesn't officially support qemuarm (so nobody even cares to review and merge a fix from me)
Nov 04 12:56:44 * JaMa looking into the qt5-qpa-hwcomposer-plugin failure in unstable builds with Qt 5.13
Nov 04 13:09:29 JaMa: This is what I get for 72 for qemux86: https://bpaste.net/show/SRTMI
Nov 04 13:09:33 Seems to be the same as you have
Nov 04 13:11:16 yes, that's the same issue: building native tools against the wrong glibc
Nov 04 13:11:45 the fix from 68 doesn't work anymore with 72 (at least not completely; IIRC it fixes native mksnapshot, but not torque)
Nov 04 13:27:50 JaMa: Not sure to what extent the meta-lgsvl-browser repo is related to webruntime, if at all. They have some "fix" for torque there, it seems? https://github.com/lgsvl/meta-lgsvl-browser/commit/fc83b84d7364174d784721f803239bf2f906951e#diff-3c579564ddf873e4cf3a786634b69f34
Nov 04 13:28:02 Specifically the 0001-Wrap-mksnapshot-and-torque-calls-on-Yocto-building-w.patch
Nov 04 13:35:02 running it inside qemu should help, and would eventually even get rid of the multilib packages needed on the host
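[Editor's note: a minimal sketch of the "run it inside qemu" idea, in the spirit of the lgsvl wrap-mksnapshot-and-torque patch linked above; the wrapper layout and the TARGET_SYSROOT variable are hypothetical, not the actual patch contents:

    #!/bin/sh
    # v8's build runs the freshly cross-built mksnapshot/torque on the build
    # host, which fails when the host libstdc++/glibc is older than what the
    # binaries were linked against (the GLIBCXX_3.4.26 error above).
    # Running them under qemu user-mode emulation against the target sysroot
    # removes the host library dependency entirely.
    # (qemu-i386 for qemux86; qemu-x86_64 for qemux86-64)
    exec qemu-i386 -L "$TARGET_SYSROOT" "$(dirname "$0")/torque.real" "$@"

The real binary would be renamed to torque.real with this wrapper put in its place, and mksnapshot would get the same treatment.]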
**** BEGIN LOGGING AT Mon Nov 04 14:33:13 2019
Nov 04 14:41:17 fix for qt5-qpa-hwcomposer-plugin pushed, the next round of unstable builds with Qt 5.13 should build fine
Nov 04 14:55:28 JaMa: Seems straightforward enough in the end :)
Nov 04 14:57:35 what?
Nov 04 15:07:22 Tofe: the latest revision of libqofono still fails sometimes, e.g. http://jenkins.nas-admin.org/job/LuneOS/view/unstable/job/luneos-unstable_raspberrypi2/lastBuild/console just now
Nov 04 15:08:47 | make[3]: *** [tst_qofonoconnmancontext.moc] Bus error
Nov 04 15:13:24 JaMa: That patch, I mean ;)
Nov 04 15:13:36 For qt5-qpa-hwcomposer-plugin ;)
Nov 04 15:15:14 ah, yes, that one was simple; I thought you were talking about webruntime
Nov 04 15:15:30 JaMa: No, not webruntime ;)
Nov 04 15:15:50 For libqofono we could disable the building of tests as a workaround
Nov 04 15:16:01 JaMa: we probably don't --- what he just said
Nov 04 15:16:42 Herrie: I'm seeing the same 2 issues with qemux86-64
Nov 04 15:16:46 in OSE
Nov 04 15:17:26 JaMa: OK, good, it's at least consistent
Nov 04 15:18:03 There's no real way to do that yet in the .pro, it seems, though
Nov 04 15:18:36 Tofe: Herrie: not sure if moc fails only in the tests, but yes, my guess is a race condition in some generated files, as moc probably fails with a bus error when it hits an unexpected EOF
Nov 04 17:00:28 JaMa: Not sure, it could also be differences between our ConnMan and Mer's, for example
Nov 04 19:02:56 mmh, on rosy the flashing of the stable build fails, but if we kill the recovery's tar, it then tries the one from our archive and that succeeds
Nov 04 19:04:25 ^ should we do that by default? i.e. first extract the needed files somewhere, and then bootstrap our own tar
Nov 04 19:05:00 Tofe: interesting, it used to be just my tissot suffering from this
Nov 04 19:05:34 Tofe: doing it by default won't work, because we need the twrp tar to unpack at least enough of our image so that our busybox is included
Nov 04 19:06:11 or we would need to package a separate static tar build in the same package and use that
Nov 04 19:06:46 JaMa: mmh, is there a tar-static recipe? :)
Nov 04 19:07:19 I don't think so
Nov 04 19:08:06 I think for openmoko, a long time ago, we were just deploying a static tar binary from the metadata to the deploy dir, which isn't nice at all
Nov 04 19:08:40 JaMa: but if we do "tar --numeric-owner -xzf /data/webos-rootfs.tar.gz -C $tmp_extract-tar ./lib/ld-* ./bin/busybox.nosuid" as a first step, doesn't it have a better chance of success?...
Nov 04 19:10:38 isn't that what we do currently? if the first tar fails, try our tar, if it got unpacked enough?
Nov 04 19:10:57 the problem is that if the OOM killer kills something other than "tar", we're f*cked
Nov 04 19:11:29 yes, but we don't have our tar/busybox until we unpack something
Nov 04 19:12:08 we can add another small image with just tar/busybox and their deps, unpack that with the twrp tar, then use that temporary image to unpack webos-rootfs.tar.gz
Nov 04 19:12:40 mmmh, that should actually be pretty easy to do with OE
Nov 04 19:13:03 or a static build of tar or busybox included as a separate file in the package.zip
Nov 04 19:13:57 did you upgrade twrp recently? or do you know why it started to fail for you?
Nov 04 19:14:25 my tissot always suffered from this while yours didn't, afaik
Nov 04 19:14:33 I tried various versions; I think it started to fail when we got over 600MB or so
Nov 04 19:15:07 I think I was just lucky before, with rosy. But for tissot it's more weird, I never had this
Nov 04 19:15:40 Couldn't it be a bug in the tar version somehow, wrt memory, that could be bypassed with some flag?
Nov 04 19:16:03 It could completely be a bug in twrp's tar, yes
Nov 04 19:16:03 in the end it might be better to build our own twrp with a reliable busybox as well ;)
Nov 04 19:16:56 ubports did their own recovery
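[Editor's note: a minimal sketch of the two-stage unpack JaMa proposes above (a small bootstrap image first, then the real rootfs). The archive and directory names are hypothetical; the real logic would live in the android-update-package flashing script mentioned later:

    # Stage 1: let the recovery's (unreliable) tar handle only a tiny
    # archive containing a static busybox and its deps
    tar -xzf /data/luneos-bootstrap.tar.gz -C /tmp/bootstrap

    # Stage 2: unpack the big rootfs with our own busybox tar, so a bug
    # in twrp's tar (or an OOM kill of it) can no longer break flashing
    /tmp/bootstrap/bin/busybox tar --numeric-owner \
        -xzf /data/webos-rootfs.tar.gz -C /data/luneos-rootfs
]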
Nov 04 19:17:49 How hard is it to have a static busybox? just RDEPENDS on busybox-static in a new image?
Nov 04 19:18:22 (or the existing one, fwiw)
Nov 04 19:25:13 there doesn't seem to be any simple way to achieve that; so maybe we'd better go with the other approach (embed a separate little busybox.tar.gz image that we could use as a bootstrap)
Nov 04 19:25:15 PN-static is only for static libraries (and usually empty), I don't think there is a busybox-static package at all
Nov 04 19:25:58 probably a separate busybox-static recipe would be needed to provide it
Nov 04 19:26:52 I saw an OE discussion from 2012 about doing exactly that, but it seems they never really made it
Nov 04 19:27:14 or changing the default busybox to be statically linked and including that one (in both images)
Nov 04 19:27:31 but that gets more complicated with the busybox.suid and busybox.nosuid binaries
Nov 04 19:28:16 in that case it would be a tar_static recipe
Nov 04 19:29:51 hmm, the last SHR build I did was over 7 years ago, it seems; time to let go and not migrate it to the new disk
Nov 04 19:30:27 :)
Nov 04 19:30:43 maybe it's time to move on, now :p
Nov 04 19:33:14 git diff github/jama
Nov 04 19:33:14 fatal: mmap failed: Cannot allocate memory
Nov 04 19:33:17 this is new as well
Nov 04 19:33:28 if we use an external twrp binary for recovery anyway, isn't it simpler to just add one busybox binary for tar, like I did a long time ago?
Nov 04 19:34:33 but building it ourselves is better, I agree
Nov 04 19:34:42 maybe you should send a patch with it
Nov 04 19:34:54 https://gitlab.com/nizovn/meta-smartphone/commit/36a75340ba1a65c8d3f533a9faa608fa73c24d07
Nov 04 19:36:15 in current LuneOS builds we don't use twrp-package, do we?
Nov 04 19:37:07 there is no twrp* in meta-smartphone yet, so it's not just this 1 commit
Nov 04 19:37:39 iirc, we provide twrp for tissot within our initrd
Nov 04 19:38:09 but it's done another way, I think
Nov 04 19:39:45 JaMa: https://gitlab.com/nizovn/meta-smartphone/commit/b4f2c007dd3a13e262421c256097d42fa4e91f35
Nov 04 19:40:08 yes, thanks
Nov 04 19:42:57 can someone send an applicable and tested PR for me to merge, if we agree to do it this way?
Nov 04 19:44:26 I can prepare a PR, but I can't test due to hw problems
Nov 04 19:44:32 nizovn: I can test
Nov 04 19:44:57 ok
Nov 04 19:45:01 I can test too here, to some extent
Nov 04 19:46:07 so do you think it's ok to move from the luneos-dev-package recipe for building the zip to a separate twrp image type?
Nov 04 19:46:12 also don't forget to update the OS_NAME so it's not webOS OSE
Nov 04 19:47:36 JaMa: I planned to get it from somewhere automatically :), even found the source, but I've forgotten it already
Nov 04 19:47:41 and webos_deploy.sh should respect the OS_NAME variable as well, instead of os_name="webOS OSE"
Nov 04 19:47:57 yup
Nov 04 19:49:13 nizovn: not sure I understand your sentence about moving away from luneos-dev-package; don't we need both twrp and the zip (with the deploy shell script and the static busybox)?
Nov 04 19:49:46 the first part should go to https://github.com/webOS-ports/android-update-package/ as a PR anyway
Nov 04 19:50:27 for tissot it's a bit special, as there is no recovery partition; but for the others, the flashed recovery is quite ok so far... or did I misunderstand something here
Nov 04 19:52:21 Tofe: btw there is a Qt 5.13 build for rosy already in unstable if you want to test something with it, but there is still that issue with the removed devicePixelRatio API
Nov 04 19:53:01 JaMa: well, the stable build doesn't seem to boot, so I guess I'm not there yet :p
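[Editor's note: the webos_deploy.sh change suggested above (19:47:41) would amount to something like the following; OS_NAME is the variable named in the log, while the fallback value is an assumption:

    # before: os_name="webOS OSE"
    # after: honor the build-provided OS_NAME, defaulting to LuneOS
    os_name="${OS_NAME:-LuneOS}"
]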
Nov 04 19:54:46 JaMa: is it that easy? ... https://www.mail-archive.com/openembedded-devel@lists.openembedded.org/msg13407.html
Nov 04 19:56:28 there might be a few more issues, because the recipe is quite different now, but yes, that's the starting point
Nov 04 19:57:20 and for package.zip you will need to deploy the binary in DEPLOY_DIR for package.zip to add it inside
Nov 04 19:59:07 can't we DEPEND on busybox-static and fetch it from the sysroot somehow? too sketchy?
Nov 04 19:59:58 target binaries aren't normally installed in sysroots
Nov 04 20:00:11 makes sense :)
Nov 04 20:00:41 it's just a bit annoying to clutter the deploy directory
Nov 04 20:01:24 you can extend the sysroot_stage_all() task to stage it for busybox-sysroot, but that's probably stranger than deploying a static tar in the deploy dir
Nov 04 20:01:46 ok, let's just deploy it then
Nov 04 20:02:02 we will drop it from the deploy dir when rsyncing to milla
Nov 04 20:02:15 so it won't clutter our builds there
Nov 04 20:06:09 Tofe: iirc we build luneos-dev-image, and then pack it into a zip using luneos-dev-package
Nov 04 20:06:28 a separate twrp image type is just cosmetic, to build only luneos-dev-image; it was added for consistency in webOS OSE, where just the webos-image-devel image is built for all targets, resulting in a different output format automatically
Nov 04 20:08:32 for qemu we build -emulator-appliance, so it won't be consistent unless we migrate those to IMAGE_FSTYPEs as well
Nov 04 20:08:51 oh ok
Nov 04 20:09:51 but it's true that because of this inconsistency I have a bash alias to build the right image based on the MACHINE variable
Nov 04 20:12:11 nizovn: ok, I understand better; maybe we can do that in two steps? having a static busybox is quite local, whereas migrating the image type will impact our build scripts
Nov 04 20:12:55 right
Nov 04 20:17:48 I've taken the static busybox from here: https://launchpad.net/ubuntu/trusty/arm64/busybox-static/1:1.21.0-1ubuntu1
Nov 04 20:17:52 do you have a better idea?
Nov 04 20:18:40 that should do for now; I'll try to sketch a busybox-static recipe
Nov 04 20:18:57 ok
Nov 04 21:06:41 first ugly proposal: https://bpaste.net/show/CQWAW
Nov 04 21:07:07 (in meta-smartphone/meta-android/recipes-core/busybox/busybox-static_1.30.1.bb)
Nov 04 22:00:02 Tofe: use COREBASE in FILESEXTRAPATHS_prepend
Nov 04 22:01:23 and it's a bit unfortunate, as this recipe would be the first thing in meta-smartphone that makes it really oe-core version specific; maybe it should go to meta-luneos instead, especially if the busybox binary is integrated from there
**** BEGIN LOGGING AT Tue Nov 05 00:31:22 2019
**** ENDING LOGGING AT Tue Nov 05 02:59:58 2019
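[Editor's note: a rough, untested sketch of what such a busybox-static_1.30.1.bb could look like after JaMa's COREBASE remark above; the actual proposal is only at the bpaste link, so everything below is an assumption rather than that paste's contents:

    # busybox-static: reuse oe-core's busybox recipe, but link statically
    # and deploy a single binary for the flashing zip to pick up
    require ${COREBASE}/meta/recipes-core/busybox/busybox_1.30.1.bb

    # let oe-core's defconfig and patches be found from this layer
    FILESEXTRAPATHS_prepend := "${COREBASE}/meta/recipes-core/busybox/busybox:${COREBASE}/meta/recipes-core/busybox/files:"

    # force a fully static build in the generated config
    do_configure_append() {
        sed -i -e 's/# CONFIG_STATIC is not set/CONFIG_STATIC=y/' ${B}/.config
    }

    # target binaries aren't staged in sysroots (see above), so deploy the
    # result to DEPLOY_DIR where the package.zip step can find it; the
    # binary path may need adjusting for the suid/nosuid split mentioned
    # earlier in the log
    inherit deploy
    do_deploy() {
        install -m 0755 ${B}/busybox ${DEPLOYDIR}/busybox-static
    }
    addtask deploy after do_compile before do_build
]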