**** BEGIN LOGGING AT Thu Jan 14 02:59:58 2016
Jan 14 08:43:43 good morning
Jan 14 09:04:12 there seems to be a potential race condition in package_manager.py, in OpkgIndexer.write_index(). if e.g. "all" appears twice across ALL_MULTILIB_PACKAGE_ARCHS/SDK_PACKAGE_ARCHS/MULTILIB_ARCHS, then two opkg-make-index instances will be run in parallel on tmp/deploy/ipk/all/Packages. i think this is what causes https://www.mail-archive.com/yocto@yoctoproject.org/msg22808.html. we are seeing the same error.
Jan 14 09:04:18 any comments?
Jan 14 13:00:54 hi
Jan 14 13:01:09 i want to ask a question
Jan 14 13:04:30 just ask
Jan 14 13:05:42 is the yocto project like lfs?
Jan 14 13:06:27 i want to make a small distro (64-bit multilib, musl-based)
Jan 14 13:06:32 can i use yocto?
Jan 14 13:08:09 i saw this link: git://git.yoctoproject.org/meta-intel -b jethro
Jan 14 13:08:35 is it just for intel mainboards or can i use it for amd mainboards?
Jan 14 13:09:25 in short, can we make a distro like debian, arch or gentoo with yocto?
Jan 14 13:17:42 short answer is yes
Jan 14 13:18:00 that is what OE is for
Jan 14 13:18:58 is amd supported?
Jan 14 13:19:09 supported by yocto?
Jan 14 13:19:33 well, can i pick what i want?
Jan 14 13:19:45 e.g. musl instead of glibc
Jan 14 13:19:57 e.g. i don't want to use systemd
Jan 14 13:23:24 just curious, why isn't yocto as well known as lfs or other toolchains?
Jan 14 13:23:50 is there any desktop distro you know of that is based on yocto?
Jan 14 13:28:37 yes, amd is supported
Jan 14 13:28:57 yes, you can choose musl and sysvinit
Jan 14 13:30:29 good news.
Jan 14 13:30:49 what about a big distro like debian or arch...
Jan 14 13:31:04 do you know a project
Jan 14 13:31:18 that uses yocto for desktop systems?
Jan 14 13:38:58 or how can i create one?
Jan 14 14:01:05 any idea how to properly set armhf as an architecture for the package manager?
Jan 14 14:10:18 in what capacity, and what package manager?
Jan 14 14:10:25 package arch, compilation arch, etc.?
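A minimal sketch of the deduplication fix hinted at in the race-condition report above: if the combined arch lists are reduced to unique entries before the indexers are spawned, no two opkg-make-index processes can ever target the same Packages file. The helper name and list handling here are illustrative only, not the actual package_manager.py code.

```python
# Illustrative sketch (not the real OpkgIndexer code): collapse the
# combined arch lists to unique entries, preserving first-seen order,
# so only one opkg-make-index runs per deploy/ipk/<arch> directory.
def unique_index_archs(*arch_lists):
    seen = set()
    ordered = []
    for arch_list in arch_lists:
        for arch in arch_list:
            if arch not in seen:  # "all" often appears in several lists
                seen.add(arch)
                ordered.append(arch)
    return ordered

# "all" shows up in two lists but would be indexed only once:
archs = unique_index_archs(["all", "armv7a"], ["all", "x86_64"])
```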
Jan 14 14:10:53 (and AFAIK, 'armhf' as a package arch is invalid. There isn't enough information there to tell us what kind of package you are installing for compatibility)
Jan 14 14:13:42 I'd like to install packages with apt-get. The problem is I kind of forced the compilation to armhf by setting DPKG_ARCH ?= "armhf" in package_deb.bbclass, but on the system itself I have to force-install the deb files because the setup is done for armel
Jan 14 14:23:19 yeah, sounds like you broke the configuration.
Jan 14 14:23:44 The package arch of the system is defined by the tune files (and recipes).. and the package manager itself should have been set up to match, as appropriate..
Jan 14 14:23:59 I don't use deb, just opkg and rpm -- both of which do work in this configuration, so I'd assume deb does as well.
Jan 14 14:24:30 If you don't like the package architecture of your configured system, you can change it -- but you need to make sure everything is consistent.
Jan 14 14:25:38 The main package architecture is defined in bitbake.conf as:
Jan 14 14:25:39 bitbake.conf:PACKAGE_ARCH ??= "${TUNE_PKGARCH}"
Jan 14 14:26:11 You would likely need to change that to your custom value(s) if you wanted something different. There are also other things that check that PACKAGE_ARCH is contained in other lists, so those lists will have to be defined for your custom configuration as well
Jan 14 14:26:32 -if- the issue is simply that apt-get isn't configured properly to match the constructed system -- then that is a very different (and likely easier) problem to solve
Jan 14 14:31:45 oh ok
Jan 14 14:34:44 yeah, there is clearly no consistency. I'm gonna try; it seems tricky
Jan 14 14:35:42 has anybody tried yocto on the x86_64 arch?
Jan 14 14:45:36 is there a way to see complete lists of valid PACKAGE_ARCH, MACHINE_ARCH?
Jan 14 14:54:23 mahmutov: yes, many people use yocto to build x86-64 (or sub-arch) targets.
Jan 14 14:54:37 any example?
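As a concrete illustration of the advice above: the package architecture should come from the tune configuration, not from editing package_deb.bbclass. A hedged local.conf sketch; the tune name is only an example and must be one your machine's tune files actually define:

```bitbake
# local.conf sketch: select a hard-float tune instead of forcing
# DPKG_ARCH. "cortexa9hf-neon" is an example value only; pick one
# that your MACHINE's included tune file provides.
DEFAULTTUNE = "cortexa9hf-neon"
```

To see the resulting values, and the full list of package arches considered valid for the configuration, the usual inspection tool is `bitbake -e <recipe> | grep -E '^(PACKAGE_ARCH|TUNE_PKGARCH|PACKAGE_ARCHS)='`.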
Jan 14 14:54:45 i want to use it too
Jan 14 14:55:05 but only if i can build a distro like ubuntu, arch or any
Jan 14 14:55:20 or any desktop project based on yocto
Jan 14 14:59:02 mahmutov: usually it makes sense to say "if i want something like debian - then i use debian"
Jan 14 14:59:22 no, that isn't an answer :)
Jan 14 14:59:28 it is the easy answer
Jan 14 15:00:00 mahmutov: while it's technically possible to use poky as the foundation of a full-featured desktop distribution, it is nowhere near trivial. neither in terms of time nor technical resources needed
Jan 14 15:00:52 mahmutov: so of course, go ahead and build something for one or two desktops to toy around. there's even a bootable iso target supported. but don't expect it to be comparable to the big names you mentioned.
Jan 14 15:01:20 ok, first, i don't want to compare to big names
Jan 14 15:01:30 why did you mention them then?
Jan 14 15:01:32 i just want to make my own system
Jan 14 15:02:01 i was looking at lfs a while ago
Jan 14 15:02:28 i want to make a distro from scratch
Jan 14 15:02:33 from source
Jan 14 15:02:55 use poky, the x86_64 machine and iso output format. then play around.
Jan 14 15:03:56 ok let me look
Jan 14 15:04:16 also i want to use newer versions of packages
Jan 14 15:04:19 like gcc 5.3
Jan 14 15:05:25 why don't you start with something working first, and then adjust step by step?
Jan 14 15:07:36 how?
Jan 14 15:08:02 16:02 < LetoThe2nd> use poky, x86_64 machine and output format iso. then play around.
Jan 14 15:08:09 i want to automate some of the work
Jan 14 15:08:22 like the toolchain, etc.
Jan 14 15:10:01 have you ever looked at the build process and what it does before saying what you want?
Jan 14 15:10:24 i find it a bit hard to talk about things otherwise.
Jan 14 15:12:24 the yocto project quick start documentation is an excellent source of information there, even including exact steps to follow along for a first try!
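The "poky, x86_64 machine, iso output" suggestion above can be sketched as a local.conf fragment. The machine and image names below are the stock examples shipped with the poky layers of this era; verify they exist in your checkout before relying on them:

```bitbake
# local.conf sketch for a bootable x86-64 image using the stock layers:
MACHINE = "genericx86-64"    # from meta-yocto-bsp; qemux86-64 also works
# then build a reference image from the shell, e.g.:
#   $ source oe-init-build-env
#   $ bitbake core-image-sato
# artifacts (including live .hddimg/.iso style images, depending on
# IMAGE_FSTYPES) land under tmp/deploy/images/genericx86-64/
```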
Jan 14 15:13:34 ok i will look
Jan 14 16:20:29 Hi, can anyone tell me if native recipes are for the target or the host? Thanks
Jan 14 16:20:49 for example, qttools and qttools-native
Jan 14 16:21:33 if i am cross compiling for an ARM system on my laptop, will qttools be for the ARM system and qttools-native be for the x86?
Jan 14 16:25:08 dtran11: yes
Jan 14 16:25:22 thanks
Jan 14 16:26:00 i always used the term "native" to mean natively on the target
Jan 14 16:26:08 so i have to reverse my thinking
Jan 14 16:29:31 does anyone know how to properly run a custom driver installation script from a bitbake task? I'm having difficulties with the kernel headers not being found
Jan 14 16:34:45 native is host, nativesdk is "SDK host", cross is "runs on the host, and does something target specific", crosssdk is "runs on the host and builds for the SDK host", cross-canadian is "builds on the host, runs on the SDK host, targets the 'target'"
Jan 14 16:34:54 and "target" (or no specific name) is target software
Jan 14 16:35:01 the naming is very host-machine specific
Jan 14 16:37:31 fray, thanks for the detailed explanation
Jan 14 16:39:35 kergoth: are you involved with git.yoctoproject.org at all?
Jan 14 16:40:10 nope
Jan 14 16:40:30 know who is on this channel?
Jan 14 16:50:29 are you having problems w/ the git server or looking for write access?
Jan 14 16:50:40 I can likely direct you to the right person, I just need to know what you need
Jan 14 16:58:58 I know this question probably comes up a thousand times a day, but I'm working on a custom recipe, and it doesn't seem to be pulling my latest version from git
Jan 14 16:59:06 I have run bitbake -c clean myrecipe
Jan 14 16:59:12 and bitbake -c cleansstate myrecipe
Jan 14 16:59:48 what is your SRCREV set to? AUTOREV or a specific version?
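The class-extension naming explained above maps directly onto recipe syntax; a short hedged sketch, using qttools as the example from the question:

```bitbake
# Recipe sketch: a single recipe commonly provides the build-host and
# SDK-host variants via class extension (standard OE mechanism),
# rather than a separately written -native recipe:
BBCLASSEXTEND = "native nativesdk"

# In a recipe that consumes it, the suffix picks the variant:
# DEPENDS = "qttools-native"   # build-time tool, runs on the build host
# DEPENDS = "qttools"          # library/tool built for the target
```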
Jan 14 17:00:03 if it's AUTOREV it should; if it's a specific version, you have to update it to match each time you change the git server
Jan 14 17:00:25 I do a git clone myself in the do_fetch step
Jan 14 17:01:06 the system, when it runs through its steps, will adjust to match what the recipe says..
Jan 14 17:01:29 if you are going around the standard do_fetch behavior and running the commands yourself, then you'll have to do similar in do_unpack as well
Jan 14 17:01:58 so my do_fetch looks like 'git clone https://github.com/myrepos.git ${S} --depth=1'
Jan 14 17:02:07 oh yeah?
Jan 14 17:02:13 I haven't implemented a do_unpack
Jan 14 17:02:40 do_fetch and do_unpack work together to prepare the software for patching (do_patch).. then what runs after that assumes it's all set up properly..
Jan 14 17:02:50 if you override one you may have to adjust the others..
Jan 14 17:03:09 is there a reason you are not just using the proper SRC_URI field and letting bitbake handle the fetch and unpack for you?
Jan 14 17:03:37 I want to do a shallow clone, rather than a full clone of the repos
Jan 14 17:04:07 any particular reason? (Shallow cloning has been something that is being generically worked on in order to save space)
Jan 14 17:04:25 It's a space saving feature, mostly
Jan 14 17:04:42 I'll do a quick build with the standard fetch and unpack
Jan 14 17:04:48 and see if it installs the correct version
Jan 14 17:05:25 Hi, I have some trouble with dependencies on kernel tasks
Jan 14 17:05:30 I am trying to do something after uImage is copied to the deploy dir
Jan 14 17:05:47 I added my code to a do_deploy_append() function
Jan 14 17:05:51 but I am always getting cp: cannot stat `/yocto_bld/deploy/images/target/uImage': No such file or directory
Jan 14 17:06:24 sounds like your path is wrong..
OR you are trying to do this before the uImage is deployed
Jan 14 17:06:33 the path is right
Jan 14 17:06:51 if I turn off this code, uImage is copied to this exact location
Jan 14 17:07:06 that copy happens during the deploy. Your step is running BEFORE the deploy
Jan 14 17:07:15 you need to fix your dependencies from one task to another
Jan 14 17:07:40 ok, I assumed that do_deploy_append happens after do_deploy
Jan 14 17:07:55 no.. do_deploy_append says "add my code to do_deploy"
Jan 14 17:08:16 there are pre and post chunks.. as well as other _appends that may be running..
Jan 14 17:08:18 ok, so I need to add a task that depends on deploy then?
Jan 14 17:08:21 the order of _appends is not defined..
Jan 14 17:08:28 you want a new task that depends on deploy
Jan 14 17:08:39 ok, thanks
Jan 14 17:09:00 Ok, that fixed it
Jan 14 17:09:09 but why - I'd still like to be able to do my shallow clone
Jan 14 17:09:38 because you had mismatches in what the system was doing.. if you override fetch you need to also override unpack so things remain consistent
Jan 14 17:10:10 So for unpack I do - what, nothing? My install step literally copies the shallow git repo to the destination rootfs
Jan 14 17:10:21 so do I just override unpack and return?
Jan 14 17:10:57 Are you adjusting the checksums and such? Since you aren't cloning down a specific revision, you need to tell bitbake to always run do_fetch no matter what..
Jan 14 17:11:06 (which in turn will re-run all of the other code)
Jan 14 17:11:20 the built-in fetcher has mechanisms in place that (even with AUTOREV) will only rebuild the package if the upstream rev has changed..
Jan 14 17:11:25 you'll need to duplicate that kind of code..
Jan 14 17:11:41 otherwise the system will assume it's been run and just not re-run it..
Jan 14 17:11:56 buuuugh ok I'm sold, I'll use the built-in fetcher
Jan 14 17:11:59 what a pita
Jan 14 17:12:15 that's why all of this is in place: to hide the mess from you.
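The fix agreed on above (a new task depending on do_deploy, rather than a do_deploy_append whose ordering within do_deploy is not guaranteed) looks roughly like this in a kernel .bbappend; the task name and copy destination are placeholders:

```bitbake
# Sketch: run only after the kernel's do_deploy has populated the
# deploy directory, and before do_build so it runs in a normal build.
do_publish_uimage() {
    # DEPLOY_DIR_IMAGE is where do_deploy placed uImage;
    # the destination below is a placeholder path.
    cp ${DEPLOY_DIR_IMAGE}/uImage /path/to/destination/
}
addtask publish_uimage after do_deploy before do_build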
Jan 14 17:12:29 There's got to be a template for this kind of thing? I can't be the only one who wants to do this -
Jan 14 17:12:42 As for shallow cloning, a few people were working on it. I don't know the progress of their work; you might want to check in here and in #oe to see who, and what progress has been made..
Jan 14 17:13:46 I saw something about it a while back, but I'm bound to an old version of the build system anyway because it's vendor-furnished
Jan 14 17:14:04 so any sparkling new advancements in fetch options are going to be lost on me anyhow.
Jan 14 17:17:31 actual shallow clones are unlikely to ever be supported, with the possible exception of the AUTOREV case, as we need to be able to fetch SRCREV, and we don't know at the time we run git clone how many commits deep SRCREV is from HEAD
Jan 14 17:17:35 so we don't know the depth to use
Jan 14 17:18:04 So in my case, I want to be able to pull a shallow clone (depth 1) at the current latest rev
Jan 14 17:18:21 so AUTOREV would be exactly my use case
Jan 14 17:18:38 The reason to pull the shallow clone is to save space, and because I'm not interested in the history
Jan 14 17:18:52 But I do use git to update from that head, to future revs
Jan 14 17:19:17 So the installed software, as a part of its update process, calls out to github or wherever for new versions, and pulls them down to install them.
Jan 14 17:29:51 we could likely implement support specifically for AUTOREV eventually, leveraging the shallow mirror tarball handling from my work on shallow tarball generation, but it'd be constructed differently
Jan 14 17:39:43 kergoth: being able to use git fetchers all around would be cool
Jan 14 17:39:56 ?
Jan 14 17:39:57 kergoth: even as a patch tool
Jan 14 17:40:23 we already have a git patch tool, but i suppose the fetcher could learn to do that instead of having it in the metadata.
not sure what value that would add, though
Jan 14 17:40:35 so if we have other SCMs or tarballs, then convert the tree to a git repo
Jan 14 17:40:44 during unpack and patch
Jan 14 17:41:11 I haven't used the git patch tool myself
Jan 14 17:44:11 kergoth: can the git patcher create git trees
Jan 14 17:44:18 from tarballs?
Jan 14 17:44:41 if it can, then I can switch to using it
Jan 14 17:44:55 as it makes it quite easy to work on component patches
Jan 14 17:48:55 iirc it uses am if it's already a repo and can do so, otherwise it uses apply and leaves it as not a git repo. but devtool modify -x will create the source tree as a git repo with each patch as a commit, so it sounds like that's what you want to be using..
Jan 14 17:53:16 devtool rocks for patch development.
Jan 14 17:53:41 maybe a switch to specify a collection is required
Jan 14 18:05:05 If I define SRC_URI and SRCREV, but then define do_fetch myself, what happens?
Jan 14 18:05:42 Does the build system use the fetcher to get my code? or does it run my do_fetch function?
Jan 14 18:05:45 or both somehow?
Jan 14 18:05:47 it runs yours
Jan 14 18:06:37 (don't do that, unless you plan on implementing caching/proxies/DL_DIR/etc)
Jan 14 18:07:22 My difficulty is that I don't know how to tell the build system "yes, go re-run the do_fetch every time" when I build
Jan 14 18:09:39 All I essentially want to do is run my own git clone, and copy the cloned repository directory to the destination
Jan 14 18:10:07 and I want to be able to convince the build system that I've made changes upstream and that it needs to re-clone
Jan 14 18:11:46 I thought I could "cheat" and tell it AUTOREV so that it would know to re-run the fetch every time there was a change, but I imagine that's done in the fetcher, and not really going to help me
Jan 14 18:17:45 It seems like overriding do_fetch to do my git pull, and then overriding do_unpack to do basically nothing, does what I'm looking to do.
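Two hedged recipe sketches for the fetch discussion above. The first is the standard route: let the stock git fetcher track the branch head with AUTOREV (the URL here is a placeholder). The second is the escape hatch if a hand-rolled do_fetch is kept anyway: the nostamp varflag makes bitbake re-run the task on every build, at the cost of re-running everything downstream of it.

```bitbake
# (a) stock fetcher tracking the latest revision on a branch
#     (URL is a placeholder):
SRC_URI = "git://github.com/example/myrepo.git;branch=master"
SRCREV = "${AUTOREV}"
PV = "1.0+git${SRCPV}"
S = "${WORKDIR}/git"

# (b) if you insist on a custom do_fetch, force it to always re-run:
# do_fetch[nostamp] = "1"
```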
Jan 14 18:21:55 kergoth: yes, devtool is what I use
Jan 14 18:22:28 the git patcher doing git am / git apply makes sense
Jan 14 18:22:41 depending on the repo being gitified
Jan 14 21:40:12 Oh man, I have an older computer and I needed to complete over 4k tasks. I gave up and got a spot instance on EC2, 16 cores on SSD
Jan 14 21:40:20 It's screaming
Jan 14 22:20:39 Hi, I have a question regarding tasks - what should be stated, and where, to actually run a task during the build?
Jan 14 22:20:46 I wrote a function that should be run during the kernel build, after deploy
Jan 14 22:20:57 and I did 'addtask func after do_deploy'
Jan 14 22:21:07 it is listed under possible tasks for the kernel, and I can run it manually
Jan 14 22:21:14 but I want it to be rerun during the kernel build
Jan 14 22:22:44 then you'll need to make it run before the default task
Jan 14 22:22:50 add 'before do_build' to your addtask line
Jan 14 22:22:56 there are numerous examples of this around the metadata
Jan 14 22:24:01 thanks, that did the trick
Jan 14 22:53:22 when making a recipe that inherits autotools.. does the default do_compile() run a parallel make?
Jan 15 01:16:34 I'm not sure if this question belongs here or in some more bitbake-oriented channel, but I am trying to get Intel's build of Yocto for the Edison to work, and I'm having a horrible time patching hostapd to get rid of a reference to /usr/include/libnl3
Jan 15 01:16:45 so that it won't complain about something from the host system being used
Jan 15 01:17:18 I have made several other patches for other packages and had no problem, but for whatever reason here, it keeps telling me that my patch doesn't apply as it can't find the files in question
Jan 15 01:17:34 how can I get it to actually tell me *where* it is trying to apply the patch?
Jan 15 01:18:37 I'm up to running 'bitbake -vDDD' now and still I don't even see so much as a working directory where it's tried to apply the patch.
Jan 15 02:17:08 hodapp: it applies the patch in ${S}, the source tree.
most likely S is set to a subdir rather than the toplevel of the extracted sources, or you're trying to patch generated files that don't yet exist at that point
**** ENDING LOGGING AT Fri Jan 15 02:59:59 2016