**** BEGIN LOGGING AT Mon Dec 13 02:59:56 2021
Dec 13 09:46:39 hey
Dec 13 09:46:56 is there a kernel option to not reset gpio state at linux boot ?
Dec 13 09:52:53 Daulity: the kernel does not reset any gpio unless explicitly requested to by a driver
Dec 13 09:55:05 though by default if cape-universal is enabled then a gpio-of-helper device gets set up which configures all gpios as input... though that is normally the state they are in anyway after reset
Dec 13 09:56:20 Daulity: why? are you setting gpios in u-boot?
Dec 13 10:04:53 yes u-boot sets a few gpios before it boots the kernel; they get reset to a certain state, not certain if by u-boot or the linux kernel
Dec 13 10:04:57 was just wondering
Dec 13 10:06:37 the annoying bit is that this isn't really fixable by applying an overlay on top of cape-universal due to the limitations of overlays and the fact that status="disabled"; doesn't work on individual gpios of a gpio-of-helper device node
Dec 13 10:07:10 so your options are to modify the cape-universal overlay or disable cape-universal entirely and use an overlay to declare/export gpios (with initialization of your choice)
Dec 13 10:07:23 i see
Dec 13 10:07:53 (or fix the gpio-of-helper driver to respect the status property of individual gpios... which is probably a 2-line patch)
Dec 13 10:12:35 interesting, if CONFIG_OF_KOBJ=n then nodes with a non-okay status property don't even get deserialized, however in practice CONFIG_OF_KOBJ is always y (specifically, it is only n in kernels that lack sysfs support)
Dec 13 10:14:41 yeah it's definitely a 2-line fix
Dec 13 10:16:28 https://pastebin.com/f8V8pz1V
Dec 13 10:17:15 thanks :)
Dec 13 10:19:03 rcn-ee: can you include that patch? that way overlays can disable cape-universal's gpio export for individual gpios used by the overlay
Dec 13 10:20:54 e.g. &ocp { cape-universal { P9_14 { status = "disabled"; }; }; };
Dec 13 10:23:42 Daulity: you can use that in an overlay and then if you still want the gpio exported you can just declare your own gpio-of-helper ... unfortunately it doesn't support exporting a gpio without initializing it, but at least you can choose *how* to initialize it (input, output-low, output-high) and whether or not linux userspace is allowed to change the direction of the gpio
Dec 13 14:56:40 zmatt, how far back, should i do everything? (v4.14.x -> mainline). ;)
Dec 13 16:16:22 rcn-ee: if you include it e.g. in a 4.19 kernel I'm willing to verify it actually works (although I have no idea how it could possibly _not_ work) and then I'd say make it part of the patch that adds gpio-of-helper ... it should probably have been part of it from the start
Dec 13 16:16:58 not being able to disable cape-universal gpios is presumably also why you did the hack to try to disable gpio conflict detection
Dec 13 16:19:26 i assume it would work for every version..
Dec 13 16:19:40 indeed
Dec 13 16:21:58 of_device_is_available(node) returns true if the "status" property of the node is missing, "okay", or "ok", and has been around with that exact functionality since ancient prehistory... like kernel 2.6 or something
Dec 13 21:15:08 zmatt: are you aware of python having some overhead if you were to do a save?
Dec 13 22:39:21 if python is locked to 1 cpu how would threading help
Dec 13 22:39:31 stuck between a rock and a hard place
Dec 13 22:39:43 I need to save data while doing other stuff
Dec 13 22:39:51 this is just something that python is not very good at
Dec 13 22:40:16 if I switched to C or C++ would i see a benefit if I had 1 thread dedicated to saving data
Dec 13 23:07:26 mattb0ne: for logging data to eMMC/SD you mean? is that necessary, as opposed to sending data to a client system (not the BBB) and saving it there?
Dec 13 23:08:06 regardless, if you're getting blocked on I/O then using a separate thread in python will indeed help, in the sense of allowing other stuff to get done while the writer thread is blocked on I/O
Dec 13 23:09:09 while python only allows a single thread at any given time to be executing python code, a thread that's blocked in I/O does not count as "executing python code"
Dec 13 23:09:45 so while that worker thread is blocked in I/O, your main thread can do other stuff
Dec 13 23:11:14 mattb0ne: also, everything on the bbb is "locked to 1 cpu" since it only has one :P
Dec 13 23:33:02 yeah I am talking about computer side
Dec 13 23:34:47 hmm, it's having trouble writing a few dozen KB/s ?
Dec 13 23:53:55 no i have images, so I am having to write ideally 15MB at 30 fps
Dec 13 23:54:09 so I have an enterprise drive that can handle the throughput
Dec 13 23:54:27 using dd I can write sequentially at 1.2GB/s
Dec 13 23:54:49 but saving data to disk and running a gui is too much for a single python thread
Dec 13 23:55:47 where is the data coming from?
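The worker-thread pattern explained at 23:08-23:09 above can be sketched as follows. This is a minimal illustration (the queue, file name, and data are made up for the example, not taken from the log): the worker blocks in the file write and releases the GIL there, so the main thread keeps running python code in the meantime.

```python
import threading
import queue

save_queue = queue.Queue()

def writer():
    """Worker thread: pull (path, data) jobs off the queue and write them."""
    while True:
        item = save_queue.get()
        if item is None:           # sentinel: shut the worker down
            break
        path, data = item
        with open(path, "wb") as f:
            f.write(data)          # blocking I/O; the GIL is released here
        save_queue.task_done()

worker = threading.Thread(target=writer, daemon=True)
worker.start()

# main thread: hand a frame off to the worker instead of blocking on disk I/O
save_queue.put(("frame0001.bin", b"\x00" * 1024))
save_queue.join()                  # wait until all queued writes are done
save_queue.put(None)               # tell the worker to exit
worker.join()
```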
Dec 13 23:55:59 I'd be more concerned with how the data is being handled in python
Dec 13 23:56:27 although yes, you may indeed also need a writer thread, but I don't know if that will fix the problem
Dec 13 23:57:43 coming from an area scan camera
Dec 13 23:57:59 so i have these big numpy arrays that I save as npy files for speed
Dec 13 23:58:01 but they are big
Dec 13 23:58:31 tiff has some sort of conversion and is saved at half the speed but the file size is much smaller
Dec 13 23:58:55 i was going to whip up a python program that just makes a large random matrix and saves it to disk to see how fast I can go
Dec 14 00:00:03 an npy file is the raw data of your array with a tiny header
Dec 14 00:00:24 if the npy file is larger than you expect, that's probably due to whatever data format your array has
Dec 14 00:01:11 hmmm
Dec 14 00:01:26 so my image is like 4024x3036
Dec 14 00:01:43 and I think it is 16 bit
Dec 14 00:02:10 it's monochrome but I should look at maybe forcing it to int to save space
Dec 14 00:02:23 ??
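The size arithmetic being discussed can be checked directly. A minimal sanity check, assuming a 4024x3036 monochrome frame at 16 bits per pixel (uint16); an .npy file only adds a small header on top of this raw size:

```python
# raw size of one frame: width * height * bytes-per-pixel
width, height = 4024, 3036
bytes_per_pixel = 2                  # 16-bit monochrome, one channel
raw_bytes = width * height * bytes_per_pixel
print(raw_bytes)                     # 24433728
print(round(raw_bytes / 2**20, 1))   # 23.3 (MiB -- the figure quoted in the log)
```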
Dec 14 00:02:39 "monochrome" is not a datatype
Dec 14 00:03:17 if it's 16-bit then 4024x3036 pixels will be 23.3 MB
Dec 14 00:04:06 i crop to save space
Dec 14 00:04:19 so my size will vary but it will still be large
Dec 14 00:04:32 16bit would explain the size as a pure numpy array
Dec 14 00:04:43 width * height * 2 bytes if you're using 16 bits per pixel
Dec 14 00:04:44 monochrome means i just have 2D
Dec 14 00:05:00 if it were color I would have 3 times the size
Dec 14 00:05:01 lol
Dec 14 00:05:48 not if it's still 16 bits per _pixel_ :P
Dec 14 00:06:24 if it's 16 bits per channel per pixel then yes it would be
Dec 14 00:06:56 anyway, tiff is a container format so that says absolutely nothing about the resulting size
Dec 14 00:08:33 tiff can contain raw images, RLE compressed images (like GIF or PNG), JPEG, and a bunch more
Dec 14 00:08:49 did not know that
Dec 14 00:11:09 do you think if I ported to C or C++ i could get an improvement in performance
Dec 14 00:13:06 I cannot possibly opine on what might improve performance without a determination of what it is you're spending your time on
Dec 14 00:14:38 like, python would be slow if you were to do actual per-pixel calculations or processing in it, but that's why you use libraries like numpy that already do the hard work in optimized C code
Dec 14 00:15:44 if python is solely passing big blobs of data from one optimized C function to the next, as it should be, then absolutely nothing will be gained from reimplementation in C/C++, other than many bugs probably
Dec 14 00:19:44 hmmmm people are back on IRC?!
Dec 14 00:20:04 ds2: ehh, people have never been gone from irc?
Dec 14 00:20:24 thought everyone left for slack?
Dec 14 00:20:42 lol what, no?
Dec 14 00:21:19 *shrug* seemed that way... along with discordia
Dec 14 00:21:20 I have an account there but I'm never there since slack is annoying
Dec 14 00:21:35 slack is beyond annoying
Dec 14 00:22:05 especially two accounts, since you can't have both open in one tab and I refuse to open up two slack tabs
Dec 14 00:22:40 the attempts at controlling content is just unacceptable
Dec 14 00:22:46 well moving to C or something i could get true multithreading
Dec 14 00:22:53 and just have a thread that just saves
Dec 14 00:22:57 is what I am thinking
Dec 14 00:23:04 mattb0ne: that will work equally well in python
Dec 14 00:23:09 for the reason I explained above
Dec 14 00:23:11 btw - has anyone used the battery charger on the pocket?
Dec 14 00:23:26 but you said python only has 1 thread
Dec 14 00:23:34 mattb0ne: go scroll back and read what I said
Dec 14 00:24:12 ds2: I mean, there's still the same hazard with using a battery
Dec 14 00:24:32 I am passing a blob of data from numpy to a disk
Dec 14 00:24:41 ds2: the 3.3v available on the headers and used for e.g. SD card is the one from the ldo, which remains permanently enabled if on battery
Dec 14 00:24:46 yes but are things properly brought out like on the stock bone?
Dec 14 00:24:58 but I dont see how that cannot be improved by separating the gui management vs the IO to disk
Dec 14 00:25:10 it is? blah
Dec 14 00:25:22 I can see the plans for a quick fun project btwn xmas and new years flying out the window :(
Dec 14 00:25:29 ds2: and you can't patch the pocket like you can the bbb
Dec 14 00:25:39 well, you can still use the battery... provided you never shut down ;)
Dec 14 00:26:03 is the SD the only thing on that LDO?
Dec 14 00:26:12 no, I need shutdown
Dec 14 00:26:20 as the sleep mode on the am33x sucks
Dec 14 00:26:44 I guess I can price out some cheap pfets
Dec 14 00:27:06 mattb0ne: I don't understand what you're saying. shoving your bigass file writes into one or more worker threads will almost certainly help a lot
Dec 14 00:28:03 right but only for C since for python I would still be blocking while the write executes
Dec 14 00:28:16 right?
Dec 14 00:28:22 mattb0ne: again, scroll back and read my explanation of threading in python so I don't have to repeat myself
Dec 14 00:28:48 ok
Dec 14 00:29:32 so based on all your statements I think it makes sense for me to port to C so I can have a worker that just handles writes to the disk
Dec 14 00:29:39 mattb0ne: https://libera.irclog.whitequark.org/beagle/2021-12-13 at 23:08-23:09
Dec 14 00:29:51 mattb0ne: then you understood nothing I said
Dec 14 00:30:31 what you will most likely get by doing that is more crashes, not more performance :P
Dec 14 00:30:32 i am talking about my computer
Dec 14 00:30:35 not the beaglebone
Dec 14 00:30:42 so I am not locked
Dec 14 00:30:44 i got 12
Dec 14 00:31:04 my explanation of threading in python is platform-independent
Dec 14 00:31:20 so while that worker thread is blocked in I/O, your main thread can do other stuff
Dec 14 00:31:44 that is the key statement that I guess I am misunderstanding
Dec 14 00:32:04 python is smart enough to do something else while waiting for the I/O to clear
Dec 14 00:32:16 i thought unless you do something specific it would be blocking
Dec 14 00:32:21 that has nothing to do with python being smart, that's how threads work
Dec 14 00:32:22 I do not have threads in my implementation
Dec 14 00:32:31 just asyncio
Dec 14 00:32:59 right, which won't help when dealing with large file I/O which is unfortunately always blocking
Dec 14 00:33:07 hence, use worker threads
Dec 14 00:33:13 aha
Dec 14 00:33:41 so I need to make a worker thread in python and see how that goes
Dec 14 00:33:49 for the I/O
Dec 14 00:34:01 maybe that would have been better than all this asyncio stuff
Dec 14 00:34:12 did not really buy me much other than allowing the gui to update
Dec 14 00:34:18 they're solving different problems
Dec 14 00:35:23 so the GIL does the thread switching
Dec 14 00:35:31 all for me
Dec 14 00:35:50 and asyncio makes it pretty easy to execute stuff in a thread pool and get asyncio futures for them that you can await if you want to know when those executions have completed
Dec 14 00:36:21 the OS does thread switching (keeping in mind threads may also run at the same time on separate cores)
Dec 14 00:37:58 the GIL prevents two threads from running python code at the same time, so e.g. when that big file write completes it will want to return to python code, and at that point it'll have to wait for its turn to be allowed to do so
Dec 14 00:41:20 but bear with me, just to clarify: while the big fat write is going, the GIL will allow the main thread to proceed and eventually let the worker thread back in to execute more code
Dec 14 00:42:07 while the big fat write is going, that thread will release the GIL
Dec 14 00:43:29 so writing to disk is basically outside of python so that allows the GIL to be released
Dec 14 00:44:06 since python is calling a c function anyway or it is the domain of the OS
Dec 14 00:45:20 it's up to the individual functions (implemented in C) to release the GIL when appropriate
Dec 14 00:45:50 e.g. numpy will definitely drop the GIL while doing a big calculation
Dec 14 00:47:17 ok I will read up on threads then
Dec 14 00:47:35 so what would be a case where threads do not make sense and asyncio would be the better approach
Dec 14 00:47:59 is it really just for big tasks that you would move to a thread?
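The thread-pool integration mentioned above (asyncio handing blocking work to worker threads and getting a future to await) might look roughly like this. A sketch only, not the contents of the pastebins linked below: `save_frame`, the pool size, and the file name are illustrative.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)   # caps the number of writer threads

def save_frame(path, data):
    # blocking file I/O; runs in a pool thread, off the event loop
    with open(path, "wb") as f:
        f.write(data)

async def main():
    loop = asyncio.get_running_loop()
    # schedule the write in the pool; the event loop (and e.g. a GUI)
    # keeps running until we await the resulting future
    await loop.run_in_executor(pool, save_frame, "frame.bin", b"\x01" * 16)

asyncio.run(main())
```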
Dec 14 00:48:06 asyncio is almost always the better choice
Dec 14 00:48:26 asyncio is for event handling
Dec 14 00:48:51 it is just not helpful in this context because i just have a big fat event
Dec 14 00:48:53 nothing it can do
Dec 14 00:48:57 i got it now
Dec 14 00:49:28 right, this isn't a problem where asyncio can help you
Dec 14 00:50:33 though like I said, it does offer integration via loop.run_in_executor which can run code in a thread pool and give you an asyncio future for its completion
Dec 14 00:51:28 i.e. the thread pool would handle the file save, asyncio would deal with the completion event of the file save
Dec 14 00:52:10 I will take a look
Dec 14 00:52:17 need to read up on it
Dec 14 00:52:29 i am just starting to get my arms around event loops and stuff
Dec 14 00:52:30 in general, whatever you do in the worker thread should just use the data that was passed to it and not mess with any objects or global state
Dec 14 00:52:48 ok
Dec 14 00:56:38 I _think_ something like this... https://pastebin.com/Ka0jH4xd
Dec 14 00:58:35 there's also asyncio.to_thread() which is even simpler, but it looks like it spawns a new thread each time which would be undesirable (or if it doesn't then it's not clear how to influence the number of worker threads) .. also it might be too new, introduced in python 3.9
Dec 14 01:01:25 ah it uses a single worker
Dec 14 01:02:54 sorry, thread pool
Dec 14 01:02:57 a default thread pool
Dec 14 01:03:04 I mean, that seems like it could be fine
Dec 14 01:04:20 mattb0ne: so this would be the simplified version (without requiring python 3.9): https://pastebin.com/3YCXU02t
Dec 14 01:05:48 better comments: https://pastebin.com/5jCPq91u
**** ENDING LOGGING AT Tue Dec 14 02:59:56 2021