**** BEGIN LOGGING AT Fri Feb 12 02:59:57 2021
Feb 12 06:20:26 Good morning everyone
Feb 12 06:25:09 Yesterday I tried to set up my environment for fixing our kernel patches while moving from Yocto 2.5 to 3.1. In 2.5 I used devtool to work on the patches, but after I upgraded to 3.1 it said it was unable to find the defconfig, and I was advised to try the "traditional" approach as described in the manual. I'm stuck on the "yocto-kernel-cache" part, as I'm using the kernel from our SoC vendor, and they have no
Feb 12 06:25:15 such repository. If I try skipping that step, devtool is unhappy. Is the kernel-cache stuff required?
Feb 12 06:26:24 by devtool I mean bitbake in the second-to-last sentence
Feb 12 07:28:40 yo dudX
Feb 12 07:31:10 LetoThe2nd: good morning
Feb 12 09:28:07 Hello, I've got a quick question, because I can't find a straightforward answer in the docs: is it possible to tell the gitsm fetcher to check out a specific commit hash? As with a tag, but using a commit hash
Feb 12 09:35:14 hi I'm JungleBoy
Feb 12 09:37:21 kpo_: usually SRCREV does that, no idea if it applies to gitsm too. but that's the best guess.
Feb 12 09:42:08 good morning
Feb 12 09:42:53 was wondering if anyone has built yocto for a ryzen platform and has dealt with the AMD graphics driver
Feb 12 09:44:29 intera91 which board?
Feb 12 09:46:20 basically I have IMAGE_INSTALL_append = " mosquitto libavc1394 cairo zeromq imagemagick glfw libconfig libmosquitto1 ffmpeg strace gdb mosquitto-dev boost gstreamer1.0-libav vulkan-cts vulkan-headers glm-dev glslang rinicom mesa mesa-demos" in my local.conf and I'm trying to build a core-image-sato for a VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Raven Ridge [Radeon Vega Series / Radeon Vega Mobile Series] (rev c6) (prog-if 00 [VGA controller])
Feb 12 09:47:34 still no radeonsi gets built and glxinfo states:
Feb 12 09:47:35 intera91: that sounds like a totally flawed approach.
Feb 12 09:48:00 LetoThe2nd: am a newbie at that one
Feb 12 09:48:20 intera91: 1) create a custom layer 2) create a custom image 3) make a somewhat sensible dependency chain that allows you to focus on one thing.
Feb 12 09:48:37 LetoThe2nd: am all ears at what would be the proper way to proceed
Feb 12 09:48:54 e.g., your image should INSTALL the application, and the application shall depend on the libs and whatever you need.
Feb 12 09:49:01 4) get sato out of your head
Feb 12 09:50:09 feels like being so close yet so far
Feb 12 09:50:35 5) accept the possibility that there is no real blueprint for the amd gpu setup you want and that you probably have to invest quite a bit of effort.
Feb 12 09:51:12 mmh I can see that, that's unfortunate
Feb 12 09:51:26 here is the problem: glxinfo | grep -i vendor
Feb 12 09:51:27 libGL error: MESA-LOADER: failed to open radeonsi (search paths /usr/lib/dri)
Feb 12 09:51:28 libGL error: failed to load driver: radeonsi
Feb 12 09:51:28 libGL error: MESA-LOADER: failed to open radeonsi (search paths /usr/lib/dri)
Feb 12 09:51:28 libGL error: failed to load driver: radeonsi
Feb 12 09:51:29 server glx vendor string: SGI
Feb 12 09:51:29 client glx vendor string: Mesa Project and SGI
Feb 12 09:51:30     Vendor: VMware, Inc. (0xffffffff)
Feb 12 09:51:30 OpenGL vendor string: VMware, Inc.
Feb 12 09:51:43 intera91: use a pastebin next time, please.
Feb 12 09:51:55 sure sorry
Feb 12 09:52:13 intera91: have you enabled the correct DISTRO_FEATURES and MACHINE_FEATURES for gpu/opengl support?
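A minimal sketch of the SRCREV suggestion above applied to the gitsm fetcher; the repository URL, branch and commit hash are placeholders, not taken from the log:

    SRC_URI = "gitsm://git.example.com/vendor/app.git;protocol=https;branch=master"
    # Pin the superproject to an exact commit; the gitsm fetcher then checks out
    # the submodule revisions recorded by that commit.
    SRCREV = "0123456789abcdef0123456789abcdef01234567"

SRCREV is the standard way to pin the plain git fetcher to a commit, and the gitsm fetcher is expected to honour it the same way, as suggested above.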
Feb 12 09:52:18 @LetoThe2nd thanks, that's the thing :)
Feb 12 09:52:50 qschulz: not sure how to do that
Feb 12 09:53:07 is that in local.conf?
Feb 12 09:53:12 intera91: probably you don't have a proper provider for virtual/gl or whatever it takes there. i'm no graphics guy, sorry, but really - stuffing everything into local.conf is a super bad practice and you should get rid of it before it forms a habit!
Feb 12 09:53:46 intera91: i suggest watching the videos on creating a layer, a custom image, and the distro/machine/image one :)
Feb 12 09:54:51 intera91: https://docs.yoctoproject.org/ref-manual/features.html#ref-features-distro
Feb 12 10:06:52 LetoThe2nd was searching for that video as well, do you know which # it was?
Feb 12 10:07:26 distro/images/machine? not sure, 7, 8 or so.
Feb 12 10:07:40 ok will check it out, thanks
Feb 12 10:18:38 JungleBoy: it's in the title of the Youtube video IIRC
Feb 12 10:28:18 hello all, I have 2 metas and both need a bbappend for the device-tree part, one inside meta-custom-bsp and one inside meta-custom-apps. Can I build an image from meta-custom-bsp without using the bbappend inside meta-custom-apps? I'm not sure I'm being clear... sorry for that :)
Feb 12 10:30:33 NiniC0c0: no
Feb 12 10:30:59 or maybe with dynamic layers but I've no experience with that
Feb 12 10:31:58 NiniC0c0: device tree is related to hw configuration, so it is related to your machine configuration file. You should therefore have two machines or two device trees at least.
Feb 12 10:32:16 in case of two machines, you can have SRC_URI_append_<machine> in your bbappends
Feb 12 10:32:53 qschulz as always xD I have 2 meta folders (bsp and apps) with a device-tree_%.bbappend. When I build a bsp image I don't want to use the bbappend from the apps folder. maybe BBFILE_PRIORITY is enough?
Feb 12 10:33:12 NiniC0c0: no, a bbappend applies no matter what
Feb 12 10:33:35 the priority will just change the order in which they are applied
Feb 12 10:33:36 qschulz Ok. it's the same machine, i just need to add a node inside
Feb 12 10:33:50 NiniC0c0: what is this node for?
Feb 12 10:33:53 ho ok thx for the info
Feb 12 10:34:28 qschulz just to configure a custom input driver
Feb 12 10:36:53 NiniC0c0: you could always use device tree overlays
Feb 12 10:37:06 NiniC0c0: tip: just because it is physically the same machine, it doesn't necessarily have to be the same machine configuration. or dt overlays. it "depends"
Feb 12 10:37:18 apply the overlay in u-boot depending on a conf file that you'll set in your image recipe
Feb 12 10:38:16 qschulz LetoThe2nd I'm stupid, overlays should do the trick :) thank you for supporting my stupidity..
Feb 12 10:39:11 NiniC0c0: you can always thank us with free beers!
Feb 12 10:39:16 NiniC0c0: it's not stupidity, sometimes we just forget about things. Don't worry, happy to have helped, that's the most important part :)
Feb 12 10:39:41 it's a really good community guys <3
Feb 12 10:56:48 hello. What would be good practice now that I've run out of free space: should I 'clean' some temporary folder before I 'rsync' the whole ~/poky to another drive? Does it keep any absolute path links?
Feb 12 11:02:28 @NiniC0c0: If you want to solve this from a YP/OE point of view you could use dynamic layers, as qschulz pointed out. It's something like: if this layer exists then use that.
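A minimal sketch of the custom-image approach LetoThe2nd describes above; the layer and recipe names (meta-custom, my-app) are made up, and the library list moves into my-app's own DEPENDS/RDEPENDS instead of local.conf:

    # meta-custom/recipes-core/images/my-image.bb
    SUMMARY = "Image that installs the application rather than a local.conf package list"
    LICENSE = "MIT"
    inherit core-image
    # the image installs the app; my-app's recipe declares its own dependencies
    # on zeromq, libconfig, mosquitto and friends
    IMAGE_INSTALL_append = " my-app"

Graphics support then comes from DISTRO_FEATURES/MACHINE_FEATURES (e.g. opengl), as the features documentation linked above describes, not from the image recipe.

And a sketch of the two-machine variant qschulz mentions for the device-tree bbappend; the machine name and file are invented, and how the extra source actually gets compiled depends on the vendor's device-tree recipe:

    # meta-custom-apps/recipes-bsp/device-tree/device-tree_%.bbappend
    FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
    # only applied when building for the hypothetical "apps" machine configuration
    SRC_URI_append_mymachine-apps = " file://custom-input-node.dtsi"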
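As noted above, the alternative to two machine configurations is a device tree overlay applied from the bootloader, which is the route NiniC0c0 settled on; the overlay itself is ordinary device tree source compiled alongside the base tree, so no extra BitBake machinery is strictly required beyond shipping the overlay file.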
Feb 12 11:02:57 Only if the networking layer is in bblayers.conf apply the ntp bbappend: https://gitlab.com/meta-layers/meta-resy/-/tree/master/dynamic-layers/networking-layer/recipes-support/ntp
Feb 12 11:03:06 RobertBerger Thx I will also take a look at dynamic layers
Feb 12 11:05:17 kayterina: best practice is to get a disk that's big enough. second best practice is cleaning out sstate (even if only partially). moving the build does not work as easily as you might think, at least tmp has to be recreated from scratch, and probably most of the paths in local.conf and bblayers.conf have to be checked.
Feb 12 11:06:58 so if tmp has to be recreated then I can just as easily start over on the new disk
Feb 12 11:07:51 kayterina: sstate can be moved. tmp cannot.
Feb 12 11:08:11 a..ok.
Feb 12 11:08:17 kayterina: you can set up a SSTATE_MIRROR too, this way it's not entirely from scratch
Feb 12 11:08:42 I think some people also have the sstate cache on an NFS?
Feb 12 11:09:09 qschulz: NonFunctionalStorage?
Feb 12 11:09:22 kayterina: i'm not really sure why you want to rsync the whole poky?
Feb 12 11:10:30 perhaps I have a wrong directory structure then. I have my build in ~/poky/build and my layers in ~/mylayers/meta-blabla so I thought I have to move, so I move everything
Feb 12 11:10:47 rsync is just what I use to move big files
Feb 12 11:10:47 kayterina: s/perhaps/g
Feb 12 11:11:42 kayterina: what exactly do you want to move and for which usecase?
Feb 12 11:11:42 kayterina: instead of wasting time with copying, watch a couple of my videos where i explain how to lay out your project. especially the kas one essentially shows it step by step.
Feb 12 11:11:53 qschulz: low disk space.
Feb 12 11:12:18 LetoThe2nd: link please?
Feb 12 11:12:28 *sigh*
Feb 12 11:12:32 I only found 3 on vimeo
Feb 12 11:12:59 LetoThe2nd: then INHERIT += "rm_work" and some cleanup of the sstate-cache with find -atime +30 -delete?
Feb 12 11:13:20 kayterina: you're really not exactly a prime example of showing initiative and effort, i have to say. https://youtu.be/KJHJlOtTdaE
Feb 12 11:13:47 aouch. youtube.. sorry leto
Feb 12 11:13:48 qschulz: i don't like rm_work. cleaning out with a carefully chosen atime can make sense, yes.
Feb 12 11:14:33 kayterina: we've only been using youtube for one and a half years, so yes, that comes as a real surprise! ;-)
Feb 12 11:18:27 LetoThe2nd: we've been using rm_work for years already :)
Feb 12 11:19:14 qschulz: that's why i explicitly chose the wording "i don't like". not claiming my POV matches all others :)
Feb 12 11:37:58 Hi
Feb 12 11:38:43 Do I have to rebuild everything after I add a new distro_feature, or can i just somehow rebuild the related packages
Feb 12 11:40:33 build the image and it will rebuild what is needed
Feb 12 11:40:52 "it's magic!"
Feb 12 11:40:53 if you'd turned on hash equiv it will rebuild less
Feb 12 11:46:58 kanavin_home: I tried your patch on a -next build. It's interesting as we got numbers of 53, 53, 57, 58 as being different on the different distros
Feb 12 11:47:13 It should also work with the distro_features addition?
Feb 12 11:47:28 Then I'm in trouble :D
Feb 12 11:48:25 Because it thinks it has nothing to do
Feb 12 11:49:02 linums: *guess*: you added it in the image recipe.
Feb 12 11:49:14 I did
Feb 12 11:49:28 Through a distro_features_append list
Feb 12 11:49:53 linums: a distro feature is to be set in.... the distro configuration file :)
Feb 12 11:50:21 Yeah, I knew that it is not the best idea to add it from an image recipe :D
Feb 12 11:50:37 linums: well it's not that it's not the best idea, it's just that it won't work
Feb 12 11:51:10 Whatever does not work sounds like a bad idea to me :D
Feb 12 11:51:37 anybody, please sing along with yocto chant #1....
Feb 12 11:54:00 I think we got some people interested in an Ubuntu meta-deb (meta-rpm like) on the mailing list :D
Feb 12 12:11:30 qschulz: whenever i read the closing lines there, i really wonder if it's the language barrier or if such folks really think it's ok to just request/demand
Feb 12 12:15:00 Heey, moving the distro_features to its proper place worked! Thanks :)
Feb 12 12:21:43 doing something properly works? OMG!
Feb 12 12:28:23 exit
Feb 12 12:52:26 rburton: the answer is: "i want an ubuntu but all i got from my hw vendor was this yocto bsp layer" :)
Feb 12 13:08:05 I'm trying to build the extensible SDK, but I get some really strange error about do_prepare_recipe_sysroot failing with exit code 'setscene whitelist'. Any ideas? Google did not give me much.
Feb 12 13:52:48 @LetoThe2nd: and it's even worse if the one who asks is the BSP layer vendor ;)
Feb 12 13:53:25 RobertBerger: gosh didn't realize that.
Feb 12 13:53:39 @iceaway: Which meta-layers did you include?
Feb 12 13:53:52 @LetoThe2nd: I also commented on the mail ;)
Feb 12 14:13:51 The top level LICENSE for gnutls is "GPLv3 & LGPLv2.1+" because some parts are LGPL and some are GPLv3. However, some recent (dunfell?) change made this top level license the one that applies to the generated -locale packages
Feb 12 14:14:18 LetoThe2nd: the request thing is a language nuance, it is really meant as a polite way of asking for help, not in the "demand" sense
Feb 12 14:14:36 LetoThe2nd: I only figured it out when watching startrek TNG :)
Feb 12 14:14:37 This makes my images fail because we don't allow GPLv3.... I'm fairly certain that the -locale files don't fall under the GPLv3, but I'm not sure how to tell bitbake that
Feb 12 14:15:11 marex: thanks!
Feb 12 14:15:14 LetoThe2nd: I think Worf used it there a few times, to ask for a favor
Feb 12 14:15:22 LetoThe2nd: when talking to Picard iirc
Feb 12 14:17:48 RobertBerger: can't you just build a custom u-boot/kernel package for ubuntu, create a ppa, apt install it and done?
Feb 12 14:18:16 the latter is I think make deb in the linux sources, the former ... look at the debian package for u-boot and add a patch or somesuch
Feb 12 14:35:45 activating ccache globally in local.conf should not trigger rebuilds by itself, right? it does that to me in gatesgarth :(
Feb 12 14:42:50 yann: It's missing some `BB_HASHBASE_WHITELIST` entries
Feb 12 14:43:42 yann: Basically, since all of the CCACHE_* variables are `export` they will be included in every task hash, which is probably not what you want
Feb 12 14:52:38 @marex: I guess you could. Not sure if you can use it commercially afterwards and/or call it Ubuntu. You might need to remove all the references to "Ubuntu".
Feb 12 15:00:17 RobertBerger: you just don't distribute anything from the official images, just the PPA, that's not called ubuntu, but a custom repo
Feb 12 15:02:20 @marex: We need to know the use case. Even if you don't distribute anything, someone eventually will.
Feb 12 15:06:36 RobertBerger: so better do it the OE way, distribute the sources and let others compile them, hehe
Feb 12 15:07:51 @marex: Still the same problem once you ship the product. One of my customers actually wanted to compile Ubuntu sources with OE and asked Canonical. They did not allow it ;)
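A sketch of where that distro feature belongs, per the exchange above; the distro name and the feature being added are placeholders:

    # meta-custom/conf/distro/my-distro.conf
    require conf/distro/poky.conf
    DISTRO = "my-distro"
    DISTRO_NAME = "My Distro"
    # add the feature here, in the distro configuration, not in an image recipe
    DISTRO_FEATURES_append = " opengl"

Then select it with DISTRO = "my-distro" in local.conf and rebuild the image; as noted earlier in the log, bitbake rebuilds whatever the new feature actually affects.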
Feb 12 15:09:45 RobertBerger: as in what apt source + dpkg-buildpackage does? how can canonical stop you from doing so with inherently free or open source software?
Feb 12 15:10:06 RobertBerger: are you sure it wasn't something something about branding?
Feb 12 15:10:38 RobertBerger: it sounds like the firefox/iceweasel situation in debian
Feb 12 15:10:44 @marex: Trademark is for sure an issue.
Feb 12 15:12:15 RobertBerger: if you are compiling the linux kernel, what does canonical have to do with it?
Feb 12 15:12:31 RobertBerger: ditto for u-boot
Feb 12 15:13:01 @marex: Nothing I would say.
Feb 12 15:13:16 @marex: It's more about the rootfs and all the Ubuntu strings in there ;)
Feb 12 15:13:30 JPEW: re: how to tell bitbake what license a package is, for gnutls
Feb 12 15:13:38 RobertBerger: wget it from ubuntu.com
Feb 12 15:13:49 RobertBerger: there, no problem, canonical is distributing that part
Feb 12 15:13:52 JPEW: LICENSE_${PN}-xx = "LGPLv2.1+"
Feb 12 15:14:14 RobertBerger: you just installed packages from a ppa into your debian-derivative distro, that's built into ubuntu, so shrug
Feb 12 15:14:18 See the gnutls recipe -- this is done for several packages
Feb 12 15:14:33 (yes, it is a workaround, and the result would be poor, obviously)
Feb 12 15:15:25 @marex: You mean your end customers know how to build and install packages? That's not how embedded systems are usually sold.
Feb 12 15:17:38 JPEW: so all that's needed is to list them all in `BB_HASHBASE_WHITELIST`, right?
Feb 12 15:17:43 RobertBerger: we cannot assume that the original poster of that email thread plans to deliver a product
Feb 12 15:18:01 @marex: Oh yes we can ;)
Feb 12 15:18:13 RobertBerger: we can? :)
Feb 12 15:18:37 RobertBerger: it could be they are just preparing the building blocks for someone else
Feb 12 15:19:19 @marex: Sure ;) For fun and non profit.
Feb 12 15:19:34 @marex: "Internal use only"
Feb 12 15:23:57 sakoman: That doesn't appear to work though
Feb 12 15:24:17 sakoman: If that's supposed to apply to the -locale packages, it appears broken
Feb 12 15:24:54 JPEW: Oh, no, that specific line applies to the gnutls-xx package!
Feb 12 15:25:29 You would need to add similar lines for the -locale packages whose license you want changed
Feb 12 15:26:28 I was just using that as an example :-)
Feb 12 15:27:13 sakoman: Ah, right... I think the specific problem is that the packages are generated based on the configured locales, so we don't know all the names a priori
Feb 12 15:28:07 I suspect that we really should be overriding the license on all of the locales
Feb 12 15:28:34 Since as you say they are probably not intended to be gplv3
Feb 12 15:30:15 JPEW: we probably need some kind of locale license variable
Feb 12 15:30:25 JPEW: but I do worry the locales are under GPLv3
Feb 12 15:30:53 or perhaps parts of them :/
Feb 12 15:31:20 RP: Ya, I wasn't sure how that would work..... would they have the license of the source file they came from? Are they considered "data"?
Feb 12 15:33:38 JPEW: does the source actually say?
Feb 12 15:34:15 Not really: https://gitlab.com/gnutls/gnutls/-/blob/master/po/POTFILES.in
Feb 12 15:37:36 JPEW: https://translationproject.org/POT-files/gnutls-3.6.8.pot
Feb 12 15:38:00 JPEW: in the source it has .po files with "This file is distributed under the same license as the gnutls package."
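For reference, the per-package license override sakoman is pointing at looks roughly like the following; the package names follow the gnutls recipe he cites, but treat the exact values as illustrative rather than authoritative:

    LICENSE = "GPLv3+ & LGPLv2.1+"
    # narrow the license for specific split packages that only contain LGPL bits
    LICENSE_${PN} = "LGPLv2.1+"
    LICENSE_${PN}-xx = "LGPLv2.1+"

As the discussion notes, this only helps for packages whose names are known up front; the -locale packages are generated from the configured locales, which is exactly the open problem here.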
Feb 12 15:40:40 RP: Yep, which is a little confusing since the source code has components under different licenses: https://gitlab.com/gnutls/gnutls/-/blob/master/LICENSE
Feb 12 15:40:48 JPEW: right :/
Feb 12 15:40:57 JPEW: I think we may have to ask them
Feb 12 15:42:34 RP: Will do
Feb 12 15:43:06 we can't use something like `INHERIT_append_pn-linux-vanilla = " ccache"` in local.conf, it *has* to be += in a .bbappend, right?
Feb 12 15:45:31 yann: INHERIT is processed globally long before the recipes are parsed at all
Feb 12 15:45:38 += in a bbappend will do nothing either
Feb 12 15:45:45 'inherit foo' in a bbappend will do fine
Feb 12 15:47:14 yeah, looks better already, thx!
Feb 12 15:47:54 I've tried adding BB_HASHBASE_WHITELIST += "CCACHE_TOP_DIR CCACHE_BASEDIR CCACHE_COMPILERCHECK CCACHE_CONFIGPATH CCACHE_DIR CCACHE_NOHASHDIR" to my local.conf in the hope of allowing reuse
Feb 12 15:48:26 of sstate as mentioned by JPEW, but that just makes the task hashes unstable, can't see why
Feb 12 16:13:50 RP: I asked on their ML. I think I'll also add a "nls" PACKAGECONFIG to disable the translations (I don't *really* care about having them)
Feb 12 16:14:32 JPEW: fair enough. It's going to become an increasing problem :/
Feb 12 16:15:08 JPEW: the point of splitting them into packages is so you can ignore them
Feb 12 16:30:29 RP: Hmm, ya. Weirdly, if they weren't split into separate packages, I suspect they would have defaulted to the ${PN} package...
Feb 12 16:30:44 Which is LGPLv2
Feb 12 16:31:05 * JPEW verifies that assumption
Feb 12 16:32:13 Nope, I was wrong. There is a ${PN}-locale package
Feb 12 16:34:33 But, perhaps the split locale packages should pick their license from LICENSE_${PN}-locale (if set), that way you can at least set the license for all of them without having to know all the generated names first
Feb 12 16:41:17 is there a way to pull in a dso from a foreign architecture? currently when including that, the build tries to strip it and to check symbol versions, etc., is there a way to skip it? I checked some steps and related "inhibit" variables but don't seem to find a good solution for that.
Feb 12 16:41:43 say pull an aarch64 dso into an x86 build, etc.
Feb 12 16:42:53 weltling: Ya, you have to set a few variables to prevent it from doing those things, but it's possible
Feb 12 16:46:23 JPEW, could you help identify those please? my latest attempt was setting INHIBIT_PACKAGE_STRIP_FILES, but then dnf tries to look into the symbol versions and obviously libc is a wrong one so it fails
Feb 12 16:47:14 i was also looking at a wiki page about including binary artifacts into the build, but that concerns including artifacts of the same arch
Feb 12 16:47:27 weltling: Hmm, can you put the file somewhere other than a standard library search path?
Feb 12 16:48:26 JPEW, it is in a subfolder of /usr/lib, but it wouldn't be found by default i think
Feb 12 16:48:38 dnf finds it because it's listed explicitly in the spec, i think
Feb 12 16:49:13 hmm, perhaps it could be put somewhere else, but it'll need to be available at the exact place then
Feb 12 16:49:13 weltling: EXCLUDE_FROM_SHLIBS = "1" maybe?
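A sketch of the per-recipe ccache setup that resolved yann's question above; the recipe name linux-vanilla comes from his example, the layer path is made up:

    # meta-custom/recipes-kernel/linux/linux-vanilla_%.bbappend
    # INHERIT in local.conf is global and evaluated before recipes are parsed,
    # so per-recipe enabling goes through a plain inherit in a bbappend instead
    inherit ccache

The BB_HASHBASE_WHITELIST entries for the exported CCACHE_* variables were suggested above to allow sstate reuse, but yann reports unstable task hashes with them, so that part is deliberately left out of the sketch.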
Feb 12 16:49:46 weltling: Reading through package.bbclass is helpful for these sorts of things
Feb 12 16:51:31 JPEW, oh, reading the doc, perhaps PRIVATE_LIBS could work, gonna try
Feb 12 16:51:58 yeah, I read package.bbclass but obviously overlooked these possibilities
Feb 12 16:52:44 otherwise, it won't be possible to exclude all the dso at once, as the package also produces some libs that are from the same arch and are needed
Feb 12 16:53:11 perhaps those of the foreign arch could be separated out, that might be an idea too, then they could be excluded from shlibs
Feb 12 16:53:24 weltling: You might try to split them apart if possible
Feb 12 16:53:29 yeah
Feb 12 16:53:57 JPEW, thanks a lot for the help!
Feb 12 16:54:03 weltling: np
Feb 12 17:05:37 I am trying to make a recipe:task that will automatically run the do_test task on every recipe that inherits from a bbclass (the reason why I want this is at the bottom if anyone cares).
Feb 12 17:05:37 My current idea is to have the bbclass add the do_test task and then add that do_test task as a dependency of company-tests:do_test. That way, you can run
Feb 12 17:05:37 bitbake company-tests -c do_test
Feb 12 17:05:37 and that automatically executes recipe1.bb:do_test, recipe2.bb:do_test, etc.
Feb 12 17:05:37 This solution turns out to be pretty hard to do. I have been digging through the bitbake source code, and I can not find a way to add a dependency to a task from a different recipe. I am using an anonymous python function, so my code runs before tasks start being executed, but I have not figured out how to access the datastores of other tasks. Maybe there is no reliable way to do this since the recipes are not necessarily parsed in any specific order.
Feb 12 17:05:39 I have several other less attractive ideas too, like in company-tests:do_test, scanning all parsed recipes to find ones that inherit from my bbclass and then dynamically adding those recipes' do_test tasks to the scheduler (this would make the count of necessary tasks inaccurate, so I would love a better option).
Feb 12 17:05:40 Does anyone have any ideas on how to achieve my goal? I am open to all alternatives. Thanks in advance.
Feb 12 17:45:25 diamondman: Do you know "bitbake-layers show-recipes -i "?
Feb 12 17:46:30 (This is the bitbake way of getting all recipes inheriting a class)
Feb 12 17:47:54 Anyone seeing python3 core dumps with 3.1.5? Seems to only show up on our epyc server.
Feb 12 17:48:10 @yocton I can look at how that code works and try to borrow it.
Feb 12 17:48:15 diamondman: I would start from that, process a little, then give it to bitbake -c test ...
Feb 12 17:48:55 @yocton are you recommending triggering a 2nd run of bitbake after determining which recipes need to be tested?
Feb 12 17:52:35 "recommend" is a strong word ^^ For what I'm suggesting, you'll have one bitbake-layers run (getting the list of recipes) and one bitbake run (running the do_test task on all recipes)
Feb 12 17:55:26 Also, I wonder if "bitbake -c test world" would work
Feb 12 18:10:19 yocton: you'd probably need --runall=test if you want a task run on all the dependencies of the specified target, not just the target.
Feb 12 18:10:34 * kergoth yawns
Feb 12 18:10:42 that's only if the task exists everywhere, of course.
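Pulling the foreign-arch discussion above together, a hypothetical recipe fragment might look like the following; the file name, install path and exact variable set are assumptions rather than something confirmed in the log:

    SRC_URI = "file://libforeign.so"

    # install the prebuilt aarch64 library outside the default search path
    do_install() {
        install -d ${D}${libdir}/foreign-aarch64
        install -m 0644 ${WORKDIR}/libforeign.so ${D}${libdir}/foreign-aarch64/
    }

    # keep strip/debug-split and shlibs processing away from the foreign binary
    INHIBIT_PACKAGE_STRIP = "1"
    INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
    EXCLUDE_FROM_SHLIBS = "1"
    # alternatively, if the same recipe also ships native-arch libs that must
    # stay visible to shlibs, split the foreign lib into its own package and/or
    # hide just that one from dependency generation:
    PRIVATE_LIBS = "libforeign.so"

As noted in the log, splitting the foreign-arch files into their own package is the cleaner option when the recipe also produces same-arch libraries that other packages depend on.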
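And a condensed version of the two-step approach yocton and kergoth outline above, assuming a hypothetical company-tests.bbclass and image target:

    # step 1: list every recipe that inherits the class
    bitbake-layers show-recipes -i company-tests

    # step 2a: run the task on an explicit set of recipes...
    bitbake -c test recipe1 recipe2

    # step 2b: ...or on a target plus everything in its dependency tree,
    # which only works if do_test exists in all of those recipes
    bitbake --runall=test my-image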
Feb 12 19:25:32 RP: I'll send a better patch in a moment
Feb 12 19:26:25 the one that directly prints:
Feb 12 19:26:27 2021-02-12 00:54:44,402 - oe-selftest - INFO - Reproducibility summary for ipk: same=11252 different=0 different_excluded=71 missing=0 total=11323
Feb 12 19:26:27 unused_exclusions={'quilt-ptest', 'valgrind-ptest', 'vim', 'meson', 'kernel-devsrc', 'git', 'groff', 'watchdog', 'ruby', 'dtc'}
Feb 12 20:04:54 It looks like python3-native is broken on 3.1.5 on epyc servers.. (still looking)
Feb 12 22:17:13 It seems that setting VOLATILE_LOG_DIR = "no" recompiles everything :)
Feb 12 22:33:51 why though :/
Feb 12 22:50:14 JPEW: we need to have a call some day about icecream in a tekton context. It seems to me that a commodity device cluster could be running _many_ iceccd nodes. Ideally that would be containers running on k8s nodes.
Feb 12 22:51:02 JPEW: that might also be the path to scaling in the cloud (since passing sstate around is somewhat meh).
Feb 12 22:51:32 JPEW: also, do you have a public repo for the Dockerfiles from your docker hub images?
Feb 12 23:07:31 moto-timo: Ya. Using icecream would heavily depend on your usecase... I think more parallel builds (even with meh sstate) is probably always going to beat icecream
Feb 12 23:07:57 But if you have a small number of mega builds that take forever, then it would make more sense
Feb 12 23:09:14 I have not published my Dockerfiles... most of them are just modifications of the upstream ones (labgrid, crops); I've been a little tardy publishing them (and I'm not sure if upstream can really take the changes anyway)
Feb 12 23:10:46 Out of curiosity, what's the meh part of sstate?
Feb 12 23:13:15 kanavin_home: excellent, that sounds great :)
Feb 12 23:13:37 true cloud: e.g. s3 storage of sstate has been reported to be entirely too slow to be useful. Instead people recommend passing tarballs around.
Feb 12 23:13:49 JPEW: I have no actual experience yet.
Feb 12 23:14:43 moto-timo: I think S3 is the wrong backend to use. You want an NFS server backed with a reasonable disk
Feb 12 23:15:23 I think you can get away with running an NFS server in a container backed by a persistentVolumeClaim
Feb 12 23:15:44 (if you don't have access to an NFS server)
Feb 12 23:15:57 JPEW: I think NFS in the cloud starts to get us rapidly out of reasonably priced?
Feb 12 23:16:57 JPEW: I have two goals: a local cluster (including a bunch of rpi4s) and AWS/GCE/Azure/DigitalOcean/CloudDuJour
Feb 12 23:17:28 It depends on how you do it.... for example if you use the "hosted" NFS servers like AWS EFS (which is probably really fast) that can be expensive
Feb 12 23:18:04 JPEW: that is what I meant from my limited knowledge
Feb 12 23:18:30 But, if you just buy e.g. a 1TB block device and mount it as the backing to a container running an NFS server it's pretty reasonable
Feb 12 23:18:58 Not as cheap as S3, but not too bad
Feb 12 23:20:52 I would love to actually set up an autobuilder in the cloud like that, have a set of the needed images ready on the shelf
Feb 12 23:20:55 Also depends on what you classify as "expensive"
Feb 12 23:21:11 RP: You're going to love my talk :)
Feb 12 23:21:43 Recently ran across our colo bill. I think we could buy a reasonable chunk of cloud for that price, too.
Feb 12 23:22:31 But yes, mounted NFS for download and sstate cache is what we use, works well.
Feb 12 23:22:54 moto-timo: I priced out a 1TB sstate cache on Azure at ~$2000/yr. If you're doing cloud builds, that's a pittance compared to what you pay to actually *do* any meaningful amount of builds
Feb 12 23:24:22 And that's using their hosted NFS "files" instead of a block device + NFS in Kubernetes... surprisingly it comes out cheaper until you start to get to larger sstate sizes
Feb 12 23:25:15 And that price assumes that you're using the full 1TB for a whole year, when it's really pay-for-what-you-use
Feb 12 23:25:38 https://github.com/madisongh/tegra-test-distro/wiki/Using-AWS-S3-for-sstate-and-downloads-mirrors
Feb 12 23:26:19 I doubt you need a TB of sstate. We do "turn the last build's sstate cache into an sstate mirror for the next build, delete everything that's not used".
Feb 12 23:26:32 matt had some changes to the "s3" fetcher... I have not tried it
Feb 12 23:26:34 sstate is < 100G
Feb 12 23:26:45 neverpanic: Ah, but see, if you use an NFS server you don't need to do that
Feb 12 23:27:04 neverpanic: my sstate is bigger than that... lots of layers and configurations and I don't flush it often?
Feb 12 23:27:08 Your builds directly use NFS as the sstate cache. There is no "upstream mirror"
Feb 12 23:27:39 JPEW: you eventually do, unless you want your sstate to grow unbounded. we're running a couple thousand builds a month, so really don't want every single one of those one-off builds to end up in your sstate forever.
Feb 12 23:27:46 we need this tribal knowledge to get out of the minds of those that have done it. It is very confusing for folks new to the cloud.
Feb 12 23:28:18 sstate-mirror on the same nfs has the advantage of creating symlinks, so it's a case of replacing those symlinks with hardlinks after the build, then deleting the old mirror
Feb 12 23:28:22 @RP: I agree, an AB in the cloud would be fantastic
Feb 12 23:28:27 neverpanic: Ah, sure. But you can do that out-of-band of your actual builds
Feb 12 23:28:35 neverpanic: I know the autobuilder runs sstate into multiple TBs but the autobuilder workload is unusual
Feb 12 23:28:43 and we do run cleanup on that
Feb 12 23:29:07 @RP: the AB is more sstate than I normally run, but exactly my point ;)
Feb 12 23:29:09 neverpanic: And in that case, S3 makes a lot of sense as an upstream
Feb 12 23:29:47 But, you don't really want every build to have to figure out what's changed and upload it to S3... it's just slow. They should only read from it
Feb 12 23:30:34 I know this because at $WORK we *don't* have NFS and have to rsync to a server at the end of every build. *sad trombone*
Feb 12 23:31:45 JPEW: :(
Feb 12 23:33:43 RP: I'm working on it... it just takes a while
Feb 12 23:34:19 JPEW: I know how companies work :/
Feb 12 23:34:31 @RP or NOT
Feb 12 23:35:03 https://at.projects.genivi.org/wiki/download/attachments/16027368/GENIVI_AMM_2018_Achim_Demelt.pdf?version=1&modificationDate=1524099162000&api=v2 pages 4 and 7 have some numbers. apparently we're getting by with 30G of sstate cache.
Feb 12 23:35:23 but yeah, the autobuilder isn't exactly our use-case.
Feb 12 23:36:18 moto-timo: I did a whole price breakdown a while ago for building in Azure, and you could get about 500,000 build-hours/year (with NFS sstate & hash equiv) for about $100,000
Feb 12 23:37:03 that sounds quite cheap, actually
Feb 12 23:37:45 neverpanic: the autobuilder is different as it tests a ton of weird usecases nobody probably cares about as a whole, only as some subset, and we build all arches/libcs whereas anybody else probably again picks a subset
Feb 12 23:38:02 neverpanic: Ya, it's not as bad as I thought.... it really gives you drive to make builds as fast as possible to save costs. In that regard, NFS sstate pays for itself *really* quick
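For reference, the NFS-backed sstate setups discussed above boil down to a couple of lines of configuration; the /nfs/... paths are placeholders:

    # site.conf / local.conf
    # either point the build's own cache and downloads straight at the NFS mount...
    SSTATE_DIR = "/nfs/yocto/sstate-cache"
    DL_DIR = "/nfs/yocto/downloads"

    # ...or keep SSTATE_DIR local and treat the shared area as a read-only mirror
    # (PATH is expanded by bitbake to the object's path within the mirror)
    SSTATE_MIRRORS = "file://.* file:///nfs/yocto/sstate-mirror/PATH"

The mirror form is what enables the symlink-then-hardlink cleanup trick mentioned above, since objects pulled from the mirror appear as symlinks in the local sstate cache.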
Feb 12 23:38:10 JPEW: any idea how many build-hours we do on the autobuilder in a year?
Feb 12 23:38:27 RP: I haven't the slightest idea
Feb 12 23:38:39 JPEW: what is your definition of a build-hour?
Feb 12 23:38:54 One CPU-hour of build time
Feb 12 23:39:20 build-hours is a bad metric. Should be CPU-hours
Feb 12 23:39:29 JPEW: one virtual cpu core?
Feb 12 23:39:36 RP: Correct
Feb 12 23:40:03 halstead: how many cpu cores do we have in typhoon, roughly?
Feb 12 23:40:56 I could do with a decent illustration that the autobuilder is cost effective :)
Feb 12 23:41:52 RP: Oh, ya, I'm sure the AB is a lot cheaper than the cloud would be... this also doesn't factor in artifact storage which would also cost money
Feb 12 23:42:23 JPEW: I'm sure it is too but I'm forever being asked why we do this when cloud is obviously the solution ;-)
Feb 12 23:42:59 RP, On the controller? Just 2.
Feb 12 23:43:36 halstead: sorry, I mean overall in the workers, as in the overall infrastructure
Feb 12 23:43:50 RP, On the cluster... I can get that quickly.
Feb 12 23:44:15 halstead: 56*24 ish?
Feb 12 23:45:22 halstead: I know you did an assessment of bare-metal vs. cloud at some point and bare-metal still won (or so I recall RP saying :)
Feb 12 23:45:26 RP, Yes. That's about how many hyperthreaded cores
Feb 12 23:45:41 JPEW: so I think we have about 11 million cpu-hours/year available
Feb 12 23:46:47 halstead: thanks, I was just curious about a rough number, it's interesting to see the "price"
Feb 12 23:47:22 ($2,200,000 cloud cost)
Feb 12 23:47:26 moto-timo, Yes, the colo plus three-year hardware refresh was about 1/3 the price of similar capacity in a major cloud provider. Going with someone like Hetzner or OVH we could come close to price parity but there were network capability issues last time I looked.
Feb 12 23:47:36 RP: Ya, if you were going to go that big you can do "Reserved capacity" where you basically rent a virtual instance permanently, which is usually about a 40% discount, but still
Feb 12 23:48:23 JPEW, We do look at reserved pricing. And we factor in non-profit discounts as well.
Feb 12 23:48:39 Ah, ya
Feb 12 23:51:55 JPEW: sorry, I didn't mean to derail your conversation. I was just curious how we compare with the numbers
Feb 12 23:53:27 RP: Heh, it's ok. It's supper time anyway. Night all
Feb 12 23:53:58 JPEW: 'night!
Feb 12 23:54:41 Good night.
Feb 12 23:56:32 * RP should sleep too
Feb 13 00:14:58 Thank you all for the discussion. We need to compare notes more often.
**** ENDING LOGGING AT Sat Feb 13 03:02:54 2021