**** BEGIN LOGGING AT Wed Jan 14 02:59:59 2015
Jan 14 10:42:22 hi all, I'm looking for a thrift recipe, has anyone tried to cross-compile it for ARM?
Jan 14 11:12:50 kimo_: http://layers.openembedded.org/layerindex/branch/master/recipes/?q=thrift?
Jan 14 11:16:58 rburton: I saw it but it's a python binding, not the thrift package itself
Jan 14 11:17:43 the layer index is fairly comprehensive, so that's a fairly good sign that you'll need to write the recipe yourself
Jan 14 11:22:02 otavio: http://errors.yoctoproject.org/Errors/Search/Details/7690/2/10/d8a2534a4ddb <— ever seen that before?
Jan 14 11:35:54 how do I set up multilib in yocto?
Jan 14 11:44:27 chankit1: http://www.yoctoproject.org/docs/current/dev-manual/dev-manual.html#combining-multiple-versions-library-files-into-one-image
Jan 14 11:59:28 bluelightning: thanks... will read it up
Jan 14 12:15:32 running a daisy qemuarm build trying to communicate over serial between host and guest. Works fine at first, but once I've disconnected and then reconnected minicom on one side I can only send from guest to host. Anyone know something about this problem?
Jan 14 12:15:52 I'm using '-serial pty' for qemu btw.
Jan 14 12:52:57 rburton: no; never.
Jan 14 13:27:20 bluelightning: one of my recipes has a SRC_URI that points to a local directory which is an svn working copy. when i run the recipe, do_fetch() copies the contents of that directory under my recipe's $WORKDIR. the problem is that it also copies over svn metadata (.svn) directories. how can i avoid that?
Jan 14 13:29:07 darhorse: sounds like you want EXTERNAL_SRC
Jan 14 13:34:06 LetoThe2nd: thanks, that is useful. But can I still build under ${S} instead of the external directory itself?
Jan 14 13:39:58 darhorse: i don't know actually, your question just triggered the keyword in my head. sorry.
Jan 14 13:41:55 darhorse: yes you can with externalsrc - see http://www.yoctoproject.org/docs/current/dev-manual/dev-manual.html#building-software-from-an-external-source
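
[Sketch: the externalsrc setup referenced above typically amounts to a couple of lines in local.conf. The recipe name "myrecipe" and the paths below are placeholders, so treat this as an illustration rather than a drop-in config; since fetch/unpack are skipped for externalsrc recipes, the .svn metadata never ends up under WORKDIR.]

    # local.conf
    INHERIT += "externalsrc"
    # build the recipe directly from the svn working copy
    EXTERNALSRC_pn-myrecipe = "/home/user/src/myrecipe"
    # optional: keep build output (${B}) out of the source tree
    EXTERNALSRC_BUILD_pn-myrecipe = "/home/user/build/myrecipe"
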
Jan 14 14:07:44 is it possible to include a .tar.bz2 image inside an .hddimg image? for example, if i want to use the .hddimg as an installer.
Jan 14 14:08:00 or is there any other more clever way to do it?
Jan 14 14:17:07 hugovs: well, after all it's just another file in your image. have a recipe that installs it (something has to use it anyways, and that something has to go into the image too probably) and be fine.
Jan 14 14:27:00 LetoThe2nd: but then i would have a two-step process, right? first, build the .tar.bz2 image and put the result somewhere, and then have a recipe included in the other image that finds the result and installs it.
Jan 14 14:27:46 LetoThe2nd: i was just wondering if we could make an image depend on another.
Jan 14 14:29:08 I have a custom task in a recipe which runs fine after a cleanall. but when i run it again, it doesn't execute; setscene tasks are executed
Jan 14 14:29:59 it doesn't execute as one of the setscene tasks, i mean
Jan 14 15:34:59 has anyone tried running metacity with oe-core
Jan 14 15:35:42 I am trying to replace matchbox with metacity but after doing the changes I am getting a 'schema not installed' error
Jan 14 15:59:22 bluelightning: i have a custom post-processing task that takes the output of the recipe from DEPLOY_DIR_IMAGE and generates a 'processed' version under DEPLOY_DIR_IMAGE/processed. this works fine after a cleanall, but when I run the recipe again setscene tasks are run. In this case my custom task doesn't run at all. I am missing something but don't know what
Jan 14 16:01:18 darkhorse_: how do you define that task?
Jan 14 16:01:37 specifically, what is your addtask line?
Jan 14 16:02:59 do_process () { # task body} addtask process after do_deploy before do_packagedata EXPORT_FUNCTIONS do_process
Jan 14 16:03:16 bluelightning: o_process () { # task body} addtask process after do_deploy before do_packagedata EXPORT_FUNCTIONS do_process
Jan 14 16:03:41 and as i said it works okay without setscene
Jan 14 16:05:07 bluelightning: read do_process instead of o_process in previous message
Jan 14 16:11:01 darkhorse_: I think you may be better off doing this as a do_deploy_append and doing the changes in DEPLOYDIR; then your changes will be incorporated in the sstate archive for do_deploy
Jan 14 16:11:18 unless you have a compelling reason to do it in a separate task, that is
Jan 14 16:21:10 bluelightning: my task is fairly modular and i want the user to choose whether they want to process() or not
Jan 14 16:21:34 darkhorse_: how would they make that choice?
Jan 14 16:32:02 bluelightning: let's say the recipe name is mykernel.bb; they could say "bitbake mykernel -c deploy" for the unprocessed version and "bitbake mykernel" for the processed version, which would be the default
Jan 14 16:34:18 bluelightning: also, the same process() will be required by quite a few recipes and i don't want to replicate do_deploy_append() everywhere
Jan 14 16:41:15 darkhorse_: I see... well at minimum you will need do_build in your list of "before" tasks in the addtask line
Jan 14 16:45:02 bluelightning: ok. I will try it now. Do you think i need to create a do_process_setscene task along the lines of do_deploy_setscene() (deploy.bbclass)? BTW i have tried it already but it didn't work; however, that could be because of the input/output dirs that i provided
Jan 14 16:46:07 darkhorse_: that's only necessary if you want the output to be saved to and restored from the sstate cache
Jan 14 16:47:33 bluelightning: OK. but what I don't understand is that originally I thought that i could just set do_process[nostamp] = "1" and it should be fine, because it will not create stamp files and hence will run each time
Jan 14 16:48:54 darkhorse_: when you do bitbake, what you are saying is effectively bitbake -c build
Jan 14 16:49:19 darkhorse_: so your new task will only run if the build task depends upon it
Jan 14 16:50:03 now, in the first instance it does indirectly via do_packagedata as you do currently (not entirely sure why you have that in "before" though)
Jan 14 16:50:35 but when restoring the do_deploy output from sstate, that covers any tasks leading up to that point, including do_packagedata - thus your task isn't run
Jan 14 16:50:54 add do_build to the before list and it will do though
Jan 14 16:51:03 (and possibly remove do_packagedata)
Jan 14 16:54:02 bluelightning: AH!! thanks a ton :) you just saved a silly newbie's day
Jan 14 16:54:39 bluelightning: adding do_build in the 'before' list works
Jan 14 16:59:05 great! no worries
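
[Sketch: put together from the exchange above, the task arrangement darkhorse_ ends up with would look roughly like this in a shared class; "myprocess.bbclass" is a hypothetical name and the task body is a placeholder.]

    # myprocess.bbclass (hypothetical) -- reusable post-processing task
    myprocess_do_process () {
        # placeholder: turn ${DEPLOY_DIR_IMAGE} output into
        # ${DEPLOY_DIR_IMAGE}/processed here
        :
    }
    EXPORT_FUNCTIONS do_process
    # "before do_build" is the key fix: it keeps do_process in the dependency
    # chain even when do_deploy is restored from sstate; do_packagedata is
    # dropped from the list as bluelightning suggested.
    addtask process after do_deploy before do_build

[With that in place, "bitbake mykernel" runs do_process by default, while "bitbake mykernel -c deploy" stops before it, matching the opt-out behaviour described above.]
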
Jan 14 18:52:00 bluelightning: is it possible to just populate sysroots from an sstate cache as the first step of the build? I am thinking of a mechanism to allow application software developers to get hold of necessary OS support without actually building
Jan 14 18:54:16 darkhorse_: I believe that's what ASSUME_PROVIDED is for. You just have to be careful that it has what the recipe is expecting... might be a specific version or patches...
Jan 14 18:55:12 there's also something that'll get rebuilt from sstate but i don't remember what... might actually be the sysroots
Jan 14 18:57:09 no, assume provided is for stuff that's already available on the host and won't be built at all, sstate or otherwise
Jan 14 18:57:35 Aethenelle: hmm.. ASSUME_PROVIDED is mainly for the native tools. what i am thinking of is, let's say you build an arm or mips kernel and a base image, which would populate the mips or arm sysroot. now we have another team of software developers who are interested in that sysroot directory, and they don't care about the kernel and filesystem image
Jan 14 18:57:54 FOO := "${@savemyvar('TARGET_OS', d)}"
Jan 14 18:58:35 if they're working off the same temp dir, everything from the original build should still be around and the compile step for it skipped
Jan 14 18:59:17 iirc, it can even be recovered from the sstate if the sysroots are deleted
Jan 14 18:59:43 darkhorse_: the sysroot is for internal build use only. no one should be poking at it externally. if you want to ship a sysroot to your developers, build an sdk (meta-toolchain, or -c populate_sdk)
Jan 14 19:04:14 kergoth: sdk - that sounds great. I suppose there's already documentation about generating sdks..? is it trivial?
Jan 14 19:04:45 kergoth: i mean to generate an sdk if you already have a working image + kernel
Jan 14 19:15:08 bitbake -c populate_sdk will create an sdk whose contents mirror what went into the image
Jan 14 19:15:13 but yes, it's in the yocto project documentation
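
[Sketch: the SDK route kergoth points at above is just a task on the image recipe; "core-image-minimal" is only an example image name here.]

    # build a relocatable SDK (cross-toolchain + target sysroot) matching the image
    bitbake core-image-minimal -c populate_sdk
    # the self-extracting installer ends up under tmp/deploy/sdk/
    # alternatively: bitbake meta-toolchain

[Developers then run the generated .sh installer and source the environment-setup script it installs, rather than poking at the build tree's sysroot directly.]
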
Jan 14 22:58:36 hello. are there any simpler alternatives to yocto?
Jan 14 23:02:00 i've started with angstrom but nothing that i've built was actually able to run on real hardware
Jan 14 23:02:41 now i'm trying poky, maybe that'll actually run :)
Jan 14 23:18:03 dRbiG: i'm literally going to bed but you might want to expand on "unable to run on hardware"
Jan 14 23:18:10 like, what the hardware was, what the errors were
Jan 14 23:18:24 we're all running angstrom/poky/whatever on real hardware, so it does work :)
Jan 14 23:19:20 hung on 'Waiting for removable media', or couldn't mount root, genericx86
Jan 14 23:19:35 rburton: it's late here too, no rush
Jan 14 23:20:43 there's a known way to trigger that bug - building a sysvinit image and then a systemd image without deleting the tmp/ first
Jan 14 23:20:50 if you change distro features, always best to wipe out tmp
Jan 14 23:21:04 rburton: that sounds like it might have been it
Jan 14 23:21:09 hmm
Jan 14 23:22:57 i'll leave the poky build running overnight and check tomorrow
Jan 14 23:23:10 what hardware?
Jan 14 23:23:34 Crofton|work: target? old via c3 board
Jan 14 23:23:55 do you have your MACHINE and tune set properly for such an old system?
Jan 14 23:24:55 now - hopefully; i'm using genericx86 + self-configured kernel
Jan 14 23:25:05 if i've done the latter part right
Jan 14 23:25:18 most likely the compiler is defaulting to core2.. which means it won't work..
Jan 14 23:25:24 you need your DEFAULTTUNE set to i586 most likely
Jan 14 23:25:41 mhm, another useful bit, noted down
Jan 14 23:26:05 this is a very old part that you need to make sure the compiler is set to match.. otherwise you'll get binaries with instructions that are incorrect for your processor
Jan 14 23:26:34 there is a 'tune-c3' as well.
Jan 14 23:26:42 TARGET_SYS = "i586-poky-linux"; TUNE_FEATURES = "m32 core2"
Jan 14 23:26:52 there ya go.. that's defaulting to core2
Jan 14 23:26:55 it won't work
Jan 14 23:27:23 pretty much all modern CPUs (Intel and AMD) can handle core2 instructions.. the c3 is much older
Jan 14 23:27:35 you will likely need to generate a custom machine configuration file..
Jan 14 23:27:47 ^C it is then.
Jan 14 23:27:49 (look at meta/conf/machine)
Jan 14 23:27:58 or whatever BSP layers you are using..
Jan 14 23:28:24 the key is in the machine '.conf' file: you need to change the tune 'require' line to be something like: require conf/machine/include/tune-c3.inc
Jan 14 23:28:41 well i'm confused as to how the layers are supposed to work (order for starters)
Jan 14 23:28:46 but that's another topic
Jan 14 23:28:53 and then set: DEFAULTTUNE ?= "c3"
Jan 14 23:29:23 layers are a mechanism to provide new recipes, configuration files, or augment existing components of other layers..
Jan 14 23:29:30 there is a load order established in the conf/layers.conf file
Jan 14 23:29:55 a BSP layer will provide (at a minimum) a board support package, machine configuration file for a specific board
Jan 14 23:29:58 oh, ok. that makes sense
Jan 14 23:30:14 it often provides additional recipes specific to that board, possible compiler (tune) settings, etc..
Jan 14 23:30:20 i have a tune-c3.inc
Jan 14 23:30:24 for the Via C3, there is an existing tune, the 'tune-c3.inc'
Jan 14 23:30:30 indeed
Jan 14 23:30:43 that needs to be required by your machine .conf file, and you also need the 'DEFAULTTUNE ?= "c3"' line..
Jan 14 23:30:57 (many of the tune files add that line for you automatically, but the tune-c3.inc file does not for some reason)
Jan 14 23:31:17 with those changes, after your parse you should see that it's building for the c3 processor core
Jan 14 23:31:50 yeah, the overall infrastructure makes sense. the details are the devil
Jan 14 23:35:17 last general question: what would be the main difference between angstrom's systemd-image and yocto's core-image-minimal?
Jan 14 23:35:31 except the init system
Jan 14 23:35:47 different systems..
Jan 14 23:36:10 Poky (the yocto project's default distribution settings) is different from angstrom.. (not sure how, but they were different in the past)
Jan 14 23:36:30 core-image-minimal is a small busybox based system.. just enough components to create a small embedded system..
Jan 14 23:36:35 the minimal has to do with command line support..
Jan 14 23:36:53 (I don't know what the systemd-image has)...
Jan 14 23:36:58 afair the angstrom systemd-image that i managed to run inside qemu was also busybox based
Jan 14 23:37:18 you can run core-image-minimal with systemd as well.. I've not done it -- but it should be possible..
Jan 14 23:37:30 choice of initscript system is a distribution configuration option.. either sysvinit or systemd
Jan 14 23:38:22 aye. ok, thank you for the guidance
Jan 14 23:38:31 no problem
Jan 14 23:38:36 i'll put it into use tomorrow
Jan 14 23:38:45 goodnight all :)
Jan 15 01:43:55 Is there a way to override the BBFILE_PRIORITY_ of a layer without actually modifying the layer?
Jan 15 01:44:13 set it in the local.conf?
Jan 15 01:45:05 D'oh. I tried bblayers.conf... didn't think to try local.conf. Lemme see.
Jan 15 01:46:06 bblayers.conf is loaded BEFORE the layers.. local.conf is loaded after
Jan 15 01:51:32 fray: works, thanks!
Jan 15 01:53:12 no problem
**** ENDING LOGGING AT Thu Jan 15 02:59:59 2015
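
[Sketch: fray's Via C3 instructions above, collected into a hypothetical machine file; "my-c3-box" is a made-up machine name and everything other than the two tune lines would be copied or adapted from the existing genericx86 configuration.]

    # conf/machine/my-c3-box.conf (hypothetical, based on genericx86)
    # pull in the Via C3 tune instead of the default core2 tune
    require conf/machine/include/tune-c3.inc
    # tune-c3.inc does not set a default tune itself, so set it here
    DEFAULTTUNE ?= "c3"
    # ...remaining machine settings (kernel, image types, serial console, etc.)
    # carried over from the genericx86 machine file...

[And the local.conf override fray confirms at the end of the log is a single line; the name appended to BBFILE_PRIORITY_ must match what the layer declares in BBFILE_COLLECTIONS in its conf/layer.conf, so "somelayer" below is a placeholder.]

    # local.conf -- override a layer's priority without editing the layer
    BBFILE_PRIORITY_somelayer = "10"
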