**** BEGIN LOGGING AT Tue Jun 05 03:00:15 2018
Jun 05 04:31:19 Hi all, even though my BB_NUMBER_THREADS is set to 8/16, most of the time the number of parallel threads running is less than 3/4.
Jun 05 04:31:27 Can someone please help me with this?
Jun 05 08:56:34 vishnu_nk, due to dependencies between tasks there might simply not be enough tasks to parallelize. There are some bottlenecks concerning parallelization, such as the toolchain build, kernel build, etc.
Jun 05 08:59:14 ejoerns, is there any better way to improve this parallelization and reduce dependencies?
Jun 05 09:03:47 vishnu_nk, afaik not really; all target tools depend on the target toolchain, for example. So unless you use a pre-compiled one, I fear not. Use sstate to speed up subsequent builds ;)
Jun 05 09:23:30 I am using the sstate cache
Jun 05 09:23:39 Also using prebuilt binaries for toolchains
Jun 05 09:23:53 But still the build time seems to be unaffected
Jun 05 09:24:44 There must be some way to debug why the number of parallel threads running is <4 even though BB_NUMBER_THREADS is set to 8
Jun 05 09:35:24 vishnu_nk, only sometimes or *always* <4?
Jun 05 09:35:34 Sometimes.
Jun 05 09:36:24 Sometimes it runs only 1 task, or 2/3. But more than 4 occurs very, very rarely
Jun 05 09:47:35 vishnu_nk, mhh... the fetchall task should be a proper check of whether task parallelization potentially works...
Jun 05 09:58:38 vishnu_nk: when it's only running one task you can bet it's because of a bottleneck
Jun 05 09:59:06 easy to get a build sitting on, say, gettext-native: that 1) is a core dependency and 2) takes forever to configure
Jun 05 09:59:37 removing the need to build a target gettext early in the build saved a lot of time
Jun 05 10:05:13 @ejoerns, I tried running bitbake -f -c fetchall before trying the build
Jun 05 10:05:46 @rburton, are you suggesting using prebuilts for gettext-native?
Jun 05 10:06:04 no, i'm saying that occasional drops down to one task are usual and expected
Jun 05 10:06:30 But most of the time only 1/2 tasks are executed
Jun 05 10:06:42 if you inherit buildstats, you get a timechart of all tasks so you can see what is running in parallel and what is not
Jun 05 10:06:56 well when i do a build, i get 20 at once unless there's a bottleneck
Jun 05 10:07:07 Already done that, but I'm not able to generate the chart.
Jun 05 10:07:36 pybootchartgui in scripts
Jun 05 10:07:37 Parse error: empty state: 'Yocto2.1/buildstats/20180604141357' does not contain a valid bootchart
Jun 05 10:08:08 I copied the buildstats into a VM and installed pybootchartgui,
Jun 05 10:08:14 but this error still occurs
Jun 05 10:08:27 use the fork we ship, not the upstream one
Jun 05 10:09:11 from the poky module itself?
Jun 05 10:09:25 yes, in scripts/pybootchartgui
Jun 05 10:10:16 also, as per https://wiki.yoctoproject.org/wiki/Releases 2.1 is two years old now, so consider upgrading
Jun 05 10:10:41 it's no longer supported with security fixes
Jun 05 10:11:00 Ok
Jun 05 10:11:12 I will try to get the chart from the module itself
Jun 05 10:22:31 After setting BB_NUMBER_THREADS to 16, the number of tasks running in parallel has increased to >7.
Jun 05 10:22:58 Any idea why this happens?
Jun 05 10:23:32 without seeing your exact configuration and buildstats, it's impossible to say
Jun 05 10:23:49 BB_NUMBER_THREADS is a limit, not a target
Jun 05 10:23:57 if it can't run 16, it won't
Jun 05 10:24:48 Does that mean that even if I set the variable greater than the CPU count, it will still work?
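[Note: the workflow discussed above boils down to a handful of settings and one command. The values and the buildstats path below are illustrative defaults, not vishnu_nk's actual configuration.]

    # conf/local.conf -- illustrative values
    BB_NUMBER_THREADS = "8"    # upper limit on concurrent BitBake tasks, not a target
    PARALLEL_MAKE = "-j 8"     # parallelism inside each compile task
    INHERIT += "buildstats"    # record per-task timings under tmp/buildstats/<timestamp>/

    # After a build, render the chart with the pybootchartgui copy shipped in
    # poky's scripts/ directory rather than the upstream package:
    ./scripts/pybootchartgui/pybootchartgui.py tmp/buildstats/<timestamp>/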
Jun 05 10:25:12 yes, but you'll just risk slowing the build *down* because it's context switching too much
Jun 05 10:25:19 which is why recent releases just take the core count
Jun 05 10:27:00 I saw some people saying that the build performs best when BB_NUMBER_THREADS is set to 8/16 or to a limit. Is it true? Is setting it to a higher value only going to reduce the build time?
Jun 05 10:39:09 just set it to the number of cores you have, unless you have >40
Jun 05 10:39:37 I have 32 cores
Jun 05 10:40:25 the only person i know who has to set the number of threads to less than the core count is someone with 80 cores, where a fetchall can DoS the machine
Jun 05 10:41:24 But a similar build performed with Yocto 1.6 seems to be perfect. Parallel threads seem to be consistent.
Jun 05 10:41:58 Can these bottlenecks only be identified using pybootchart? I'm trying my best to get a chart
Jun 05 10:42:52 pybootchart makes it trivial because you can look at the chart
Jun 05 10:43:27 Yeah, the data seems to be very big and it will be very hard to analyse individual files.
Jun 05 10:43:37 rburton, one drawback of the OE parallelism handling is that there is no common pool of parallelism between inner (-j) and outer (task) parallelism. Thus it often kills machine responsiveness when running a build and suddenly all tasks decide to be a do_compile now... :/
Jun 05 10:43:37 Is there any alternative to pybootchart?
Jun 05 10:43:39 build with buildstats, point pybootchartgui at the directory, look for big steps in the chart data
Jun 05 10:44:00 vishnu_nk: if it doesn't work, tell us why
Jun 05 10:44:10 i'm literally looking at pybootchartgui output now
Jun 05 10:44:27 ejoerns: iirc there's some prototype make server support somewhere
Jun 05 10:45:34 Yes. Will generate a chart first.
Jun 05 10:45:55 rburton, ooh, something like that could really be interesting ;)
Jun 05 10:48:39 ejoerns: https://www.gnu.org/software/make/manual/html_node/Job-Slots.html#Job-Slots
Jun 05 10:49:43 Just one doubt: after getting buildstats data, if I make some changes in the recipe files, do I need to delete the whole tmp and sstate folders to get exact timings? Or will just rerunning bitbake -P do?
Jun 05 10:50:22 vishnu_nk: well, the second run will only build what needs to be built, so you'll need to take that into account
Jun 05 10:50:39 ideally, do a build with a fresh sstate to get a full rebuild
Jun 05 10:51:13 Ok. Will do it :)
Jun 05 10:51:17 if anyone understands how to analyse time-based data it shouldn't be that hard to write a script to read the buildstats and find instances where only one task is running, to identify bottlenecks
Jun 05 11:06:44 RP, who is the FORTRAN expert?
Jun 05 11:09:01 nobody?
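[Note: a minimal sketch of the buildstats-scanning script rburton suggests at 10:51:17. It assumes the buildstats layout of one directory per recipe containing one file per task with "Started:" and "Ended:" epoch timestamps; the 30-second threshold and the function names are illustrative, not an existing tool.]

    #!/usr/bin/env python3
    # Scan a buildstats directory (e.g. tmp/buildstats/<timestamp>) for stretches
    # of wall-clock time during which at most one task was running.
    import os
    import sys

    def load_intervals(buildstats_dir):
        intervals = []  # (start, end, "recipe/do_task")
        for recipe in os.listdir(buildstats_dir):
            rdir = os.path.join(buildstats_dir, recipe)
            if not os.path.isdir(rdir):
                continue  # skip the top-level build_stats summary file
            for task in os.listdir(rdir):
                start = end = None
                with open(os.path.join(rdir, task)) as f:
                    for line in f:
                        if line.startswith("Started:"):
                            start = float(line.split(":", 1)[1])
                        elif line.startswith("Ended:"):
                            end = float(line.split(":", 1)[1])
                if start and end:
                    intervals.append((start, end, recipe + "/" + task))
        return intervals

    def report_low_concurrency(intervals, min_gap=30.0):
        # Sweep start/end events in time order; report any window longer than
        # min_gap seconds during which at most one task was active.
        events = sorted([(s, +1, n) for s, e, n in intervals] +
                        [(e, -1, n) for s, e, n in intervals])
        running, low_since, last_started = 0, None, None
        for t, delta, name in events:
            running += delta
            if delta > 0:
                last_started = name
            if running <= 1 and low_since is None:
                low_since = t
            elif running > 1 and low_since is not None:
                if t - low_since >= min_gap:
                    print("%6.0fs with <=1 task running, around %s" % (t - low_since, last_started))
                low_since = None

    if __name__ == "__main__":
        report_low_concurrency(load_intervals(sys.argv[1]))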
Jun 05 11:14:17 * Crofton|work slaps rburton
Jun 05 11:16:49 Crofton|work: RP isn't here, please leave a message ;-)
Jun 05 11:32:32 lol
Jun 05 11:32:34 damn it
Jun 05 11:33:08 | /home/balister/opensdr/sdr-build-master-qemu/build-pi/tmp-glibc/work/core2-64-oe-linux/lapack/3.7.0-r0/recipe-sysroot-native/usr/bin/x86_64-oe-linux/x86_64-oe-linux-gfortran -m64 -march=core2 -mtune=core2 -msse3 -mfpmath=sse --sysroot=/home/balister/opensdr/sdr-build-master-qemu/build-pi/tmp-glibc/work/core2-64-oe-linux/lapack/3.7.0-r0/recipe-sysroot -Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed CMakeFiles/cmTC_1c4bc.dir/testFortranCompiler.f.o -o cmTC_1c4bc
Jun 05 11:33:08 | x86_64-oe-linux-gfortran: error: libgfortran.spec: No such file or directory
Jun 05 11:33:20 no sign of the file in tmp
Jun 05 11:33:32 a .in file of the same name is there
Jun 05 12:16:39 Crofton|work: no idea, sorry
Jun 05 12:35:06 RP, seems like something from the gcc update? Maybe khem has a clue?
Jun 05 12:38:22 Crofton|work: which gcc update? That went in a while ago. Could be, if you've not used it since then though. This is why we need more regression tests
Jun 05 12:39:28 This is FORTRAN :)
Jun 05 12:39:47 let me try backing up some
Jun 05 12:40:06 I am getting sucked into the scipy vortex again
Jun 05 12:42:40 Ohh, which Fortran.. 77, 90, .... Object Oriented Fortran..
Jun 05 12:42:55 I just want to know how much I have to make fun of you now
Jun 05 12:44:47 * Crofton|work slaps fray
Jun 05 12:44:59 trying to get a lapack recipe (that has worked) building on master
Jun 05 12:45:10 :)
Jun 05 12:45:18 I hate you all
Jun 05 12:46:01 the most I've done w/ Fortran and OE is to build the module in the GCC recipe.. :P
Jun 05 12:46:34 surely you have a customer who cares?
Jun 05 12:47:06 I think in the last 12 years I've had two requests for Fortran.. one was from a research lab.. and the other from a company doing some automation work on an assembly line
Jun 05 12:47:22 the research lab 'fixed it themselves'. I never heard what the other one did
Jun 05 12:47:37 (the lab one was BEFORE OE had a Fortran switch in GCC..)
Jun 05 13:13:49 ok, checking rocko now
Jun 05 13:38:05 same problem on rocko, don't have any easy way to back up further
Jun 05 16:17:50 Crofton|work: can you search for this spec file in the recipe sysroot?
Jun 05 16:19:38 RP: what should we do about gcc8? at this point it's ready
Jun 05 16:25:03 khem: Sorry, I've been meaning to reply on the list. The qemuppc change will break the SDKs; they will either support SPE or not support SPE, but they rely on being able to target both :(
Jun 05 16:25:21 khem: the gcc-runtime thing also worries me, I think we need to fix the tunes
Jun 05 16:25:42 khem,
Jun 05 16:25:43 [balister@prague build-pi]$ find ./tmp-glibc/ -name "libgfortran.spec*"
Jun 05 16:25:43 ./tmp-glibc/work-shared/gcc-7.3.0-r0/gcc-7.3.0/libgfortran/libgfortran.spec.in
Jun 05 16:25:43 [balister@prague build-pi]$
Jun 05 16:26:16 the SPE backend has been separated out in gcc8, so unless we want to butt heads with gcc upstream and recombine them, your first concern cannot be addressed
Jun 05 16:26:48 the gcc-runtime change is no different from what we do in other packages, e.g. valgrind
Jun 05 16:27:12 while I agree the tunes should be addressed in a more holistic way
Jun 05 16:27:58 w.r.t. SPE we just have to take a stance on SPE
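[Note: the "Fortran switch in GCC" fray mentions earlier is the FORTRAN variable consumed by the gcc recipes when selecting frontends; in releases of that era it is typically flipped on from local.conf roughly as below. This is a sketch of that mechanism only, not a fix for the libgfortran.spec error above.]

    # conf/local.conf -- enable the gfortran frontend in the cross toolchain (illustrative)
    FORTRAN_forcevariable = ",fortran"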
Jun 05 16:29:00 I was talking to Jim Wilson, a gcc maintainer, yesterday and he clearly said that SPE will go away if no one signs up to maintain it in gcc, and it will be maintained as a separate backend even if someone does sign up
Jun 05 16:48:23 khem: so we just say that nativesdk-gcc-powerpc is randomly broken?
Jun 05 16:49:04 khem: We need to do something, be it an error, or building two toolchains, or something else to handle the SPE problem. Pretending it works when it doesn't is just going to come back and bite us
Jun 05 16:49:27 khem: I can likely fix it if I spend some time on it
Jun 05 16:49:30 no, we fix it if it has build problems, but we clearly say no to SPE
Jun 05 16:49:50 khem: you can easily end up building an SDK which won't work, that is the issue
Jun 05 16:50:15 if we say we don't
Jun 05 16:50:20 * RP -> afk
Jun 05 16:51:56 if someone uses -mspe or other SPE options, the main ppc backend already throws an error; that should be enough
Jun 05 16:52:23 I think we have a patch to combine them which we can throw away
Jun 05 16:52:33 khem: so SPE builds are already broken in gcc8?
Jun 05 16:53:08 no, they are not, we need to build for *-*-*spe as the target tuple
Jun 05 16:53:31 khem: the problem is that currently one cross toolchain is meant to produce binaries for both. You changed the config option but you didn't unlock that single-toolchain issue
Jun 05 16:53:41 however I have not built it myself, so I can't attest that it works
Jun 05 16:54:06 that's why I am saying that we drop SPE
Jun 05 16:54:14 khem: I can tell from the patches that building ppc spe and then non-spe will break, as will an SDK trying to target both spe and non-spe
Jun 05 16:54:21 we will support altivec
Jun 05 16:54:23 if we're dropping SPE we need to be clear about that
Jun 05 16:54:25 only
Jun 05 16:54:35 * RP -> really gone
Jun 05 16:54:51 yeah, that's the change I have proposed for qemu
**** ENDING LOGGING AT Wed Jun 06 03:00:10 2018