**** BEGIN LOGGING AT Thu Oct 22 10:59:57 2020
Oct 22 13:05:12 Based on this: https://github.com/beagleboard/linux/blob/5ff7d8356e91ff8516c0bd37710829a7556b6790/drivers/usb/typec/tcpm/tcpm.c#L725 I think I found how Type-C is supposed to select a voltage, but I still am not sure how to add said regulator. I see that the fusb302 can declare a voltage regulator domain sourced from a regulator driver, but it seems it can just turn on charge or vbus, not set limits/voltages
Oct 22 14:02:38 m
Oct 22 15:27:39 figured it out!!!!
Oct 22 15:28:39 zmatt: why would doevents() cause py-uio to bug out?
Oct 22 15:29:34 I need to move some stuff to another thread as a workaround
Oct 22 15:31:40 what's doevents() ?
Oct 22 15:34:03 py-uio has no functions that block on I/O apart from irq_recv(), which also has two asynchronous examples, and you're not using irqs anyway, so I have no idea why you would ever want to use threads with py-uio
Oct 22 16:02:58 I start a loop and I need to break it based on the user hitting a button
Oct 22 16:03:12 doevents was one way of doing it
Oct 22 16:04:19 ......
Oct 22 16:04:52 you've made a polling loop that continuously uses 100% cpu load and instead of replacing that by a timer you're sticking it into a thread? *sigh*
Oct 22 16:04:57 https://www.riverbankcomputing.com/static/Docs/PyQt4/qcoreapplication.html
Oct 22 16:05:39 well I am not sure how long the user needs the polling loop to run
Oct 22 16:06:04 and I need to update the gui
Oct 22 16:06:19 so I need to take data as it comes in
Oct 22 16:06:40 so I kinda need to
Oct 22 16:06:43 right, so use a timer set at some reasonable update rate and have it fetch data from the pru and update the GUI
Oct 22 16:08:02 but how could I have the user stop it? it needs to react to a button press
Oct 22 16:08:03 instead of using a thread (in itself already a bad idea) that uses 100% cpu load to fetch values like crazy from the pru and then discard 99.99% of those updates
Oct 22 16:08:16 Automagic. Ugh. Do what zmatt says and know how to make it work.
Oct 22 16:08:17 ... so stop the timer when the button is pressed?
Oct 22 16:08:28 I don't quite see the problem here
Oct 22 16:08:45 maybe I am overcomplicating it
Oct 22 16:09:02 I will work on it but I found the problem which is a start
Oct 22 16:09:07 one thing since I have you
Oct 22 16:09:08 Or put the timer/poll thing on a thread with a sleep so it doesn't do anything until it needs to, then exit the thread.
Oct 22 16:09:25 I can definitely tell you, if you're using py-uio and think you need to use a thread because of it, you're horribly wrong. I avoid threads whenever possible
Oct 22 16:09:38 Ragnorok_: what? no?
Oct 22 16:09:43 that is an idea
Oct 22 16:09:50 he's using qt, it has timers
Oct 22 16:09:54 I use threads all the time, but that's in C/C++. I don't generally do threads in python.
Oct 22 16:09:59 Ok.
Oct 22 16:10:14 putting py-uio on the ai
Oct 22 16:10:34 that ai-specific step with the dtsi file
Oct 22 16:10:35 I generally avoid frameworks like Qt because their automagic ALWAYS winds up biting me in the balls.
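A minimal sketch of the timer-based approach zmatt suggests above, assuming PyQt4 (the version linked in the log); the widget, interval, and the PRU-reading step are illustrative placeholders, not code from the project:

```python
# Hypothetical sketch: poll the PRU at a fixed rate with a QTimer instead of a
# busy loop, a processEvents()/doevents() hack, or a thread.
from PyQt4 import QtCore, QtGui

class AcquisitionWindow(QtGui.QWidget):
    def __init__(self):
        super(AcquisitionWindow, self).__init__()
        self.start_button = QtGui.QPushButton("Start", self)
        self.stop_button = QtGui.QPushButton("Stop", self)

        # The timer fires inside the GUI event loop: no thread, no 100% CPU polling.
        self.timer = QtCore.QTimer(self)
        self.timer.setInterval(100)   # illustrative: refresh the GUI 10x per second
        self.timer.timeout.connect(self.update_from_pru)

        self.start_button.clicked.connect(lambda: self.timer.start())
        self.stop_button.clicked.connect(lambda: self.timer.stop())  # button ends the "loop"

    def update_from_pru(self):
        # placeholder: fetch the latest values via py-uio (e.g. a mapped struct)
        # and update labels/plots here
        pass
```

The point is that starting and stopping the QTimer replaces both the manual polling loop and the event-pumping workaround: the button press is just another event handled by the same event loop.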
Oct 22 16:10:54 Ragnorok_: lol
Oct 22 16:11:01 MattB0ne: yeah, on the AI you currently need to customize your dts if you want to do just about anything, so I'm assuming you already have that since the py-uio README is not a place for a tutorial on dts
Oct 22 16:12:49 do I need to add the dtbo file to my uEnv.txt
Oct 22 16:13:12 that is what I was planning
Oct 22 16:14:41 well on the current AI images overlays are not enabled by default, and if you're using any pins at all then using an overlay is hopeless on the AI right now (because of the horrid cape_default_pins thing)
Oct 22 16:15:25 but if you're not using any pins and have overlays enabled (can't remember if the u-boot that ships on the latest AI image supports them yet or not) then you could try compiling the dtsi into an overlay I guess
Oct 22 16:21:02 ok is uio enabled by default?
Oct 22 16:21:11 no
Oct 22 16:57:19 one more quick one
Oct 22 16:57:28 when I try and ssh to the bbai
Oct 22 16:57:41 there is a conflict with the RSA key of the bbb
Oct 22 16:57:55 it says it is dangerous and I have to remove my key
Oct 22 16:58:31 is there a way to seamlessly connect to different beaglebones
Oct 22 17:00:59 I don't get that w/ different units, only when I do something drastic like reflash; then that one will complain and I delete its key.
Oct 22 17:01:15 yeah, make ssh not think they are supposed to be the same device, which it does based on hostname (or ip if you connect by ip instead of by hostname)
Oct 22 17:03:33 if that is too inconvenient you could even just use .ssh/config rules to give the two devices separate names for ssh even though they use the same hostname/ip: https://pastebin.com/sXJYqvk0
Oct 22 17:06:49 ok
Oct 22 17:09:10 so now I would connect to debian@bbai
Oct 22 17:09:36 just bbai suffices since the config block already sets "User debian"
Oct 22 18:58:38 hey zmatt, question: fusb passes through vbus and charge enable/disable with this it seems: https://github.com/beagleboard/linux/blob/5ff7d8356e91ff8516c0bd37710829a7556b6790/drivers/usb/typec/tcpm/fusb302.c#L754 it passes those commands from the tcpm driver. Is the correct method of creating a supply to expand the fusb302 functions by also passing through tcpm's: https://github.com/beagleboard/linux/blob/5ff7d8356e91ff8516c0bd37710829a7556b6790/drivers/usb/typec/tcpm/tcpm.c#L4581?
Oct 22 18:59:21 hmm, that got cut somehow, *tcpm's: https://github.com/beagleboard/linux/blob/5ff7d8356e91ff8516c0bd37710829a7556b6790/drivers/usb/typec/tcpm/tcpm.c#L4581
Oct 22 20:05:25 Hello, I had a question regarding beagleboard detection.
Oct 22 20:06:08 I have connected the beagleboard with USB-C to my laptop and have connected it to ethernet via my router. I am using a Kali virtualbox to check if the beagleboard is detected, but it is not
Oct 22 20:06:40 It shows the IP address, but when I run a script to scan for the beagleboard it says it did not find any beagleboards
Oct 22 20:06:44 Please advise
Oct 22 20:13:50 Hey zmatt: do you know if you can increase the maximum buffer size when using py-uio to set up a shared memory location with self.shmem = pruss.ddr.map(Shmem)?
Oct 22 20:14:11 It seems that around 200kB is the maximum I can do right now.
Oct 22 20:14:32 hunter what are you making
Oct 22 20:14:48 it seems you have made good progress with the bbai
Oct 22 20:15:42 It is an 8CH, 24-bit data acquisition system. It was made for the BBB, but I'm trying to get it working with the BBAI.
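For reference, a hedged guess at the kind of ~/.ssh/config block being described above (the pastebin contents themselves are not in the log): each board gets its own alias and, via HostKeyAlias, its own known_hosts entry even when they share an address, and "User debian" makes plain `ssh bbai` work. The addresses below are placeholders.

```
# ~/.ssh/config -- illustrative only; real addresses/hostnames will differ
Host bbb
    HostName 192.168.7.2      # placeholder address of the BeagleBone Black
    User debian
    HostKeyAlias bbb          # keep a separate host-key entry per board

Host bbai
    HostName 192.168.7.2      # placeholder address of the BeagleBone AI
    User debian
    HostKeyAlias bbai
```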
Oct 22 20:17:09 I would like to write up what I've done to get the BBAI working, for other beginners like me. It has been much more difficult than I anticipated.
Oct 22 20:18:05 I have been saving your dialogs with zmatt, I will have to traverse the same land
Oct 22 20:18:22 hunter2018[m]: the size of the ddr memory region allocated by uio_pruss is configured via a module parameter
Oct 22 20:18:30 so you can change it using a modprobe config file
Oct 22 20:18:36 or via the kernel cmdline
Oct 22 20:19:20 "extram_pool_sz"
Oct 22 20:19:23 default is 256KB
Oct 22 20:21:45 e.g. put "options uio_pruss extram_pool_sz=1048576" into a new /etc/modprobe.d/whatever.conf file and reboot
Oct 22 20:22:21 the annoying part about this on the am572x is that there's no way to set it per pruss instance
Oct 22 20:23:33 I've been meaning to explore different ways of allocating shared physical memory, e.g. use DT to reserve a fixed chunk of physical memory and then expose it to userspace using a separate uio device
Oct 22 20:30:19 interesting, hunter2018: are you using an ADC with SPI communication to the PRU? (that's very similar to what I'm working on but with a much cheaper 10-bit ADC)
Oct 22 20:33:02 hunter2018[m]: You aren't having heat problems?
Oct 22 20:33:20 Thanks!
Oct 22 20:33:57 Yeah, I'm using SPI to the PRU and from the PRU to a shared memory region with userspace.
Oct 22 20:34:12 Also, I've got an AC computer fan pointed at it on my desk now.
Oct 22 20:34:29 cool, are you using hardware spi0 on the bb and assembly in the PRU?
Oct 22 20:35:09 zmatt: thanks! I'm going to try to increase it now. How large do you think it could be before it stops working?
Oct 22 20:35:35 lol We have an audio capture appliance built with a B'Bone Black. We wanted to go AI but it overheated, so we didn't. (pout) I was looking forward to playing with it.
Oct 22 20:35:38 mm302: No, the ADC has a SPI-like output, but with 4 data lines and one clock.
Oct 22 20:36:03 The pru uses assembly to read out the data from the 4 lines and send it to memory.
Oct 22 20:36:53 The BBB couldn't keep up at the fastest sampling rate (8CH, 24-bit, 16kHz) so I was hoping that the BBAI could.
Oct 22 20:37:15 We could possibly make our code faster, but I thought switching to the BBAI might just fix it.
Oct 22 20:38:59 With the current buffer size on the BBB, the python code we used had to read out the shared memory buffer every 0.3 seconds to prevent it overflowing. It would sometimes take longer than this.
Oct 22 20:39:32 I see, so your code runs on the host not the PRU (the PRU could reach a much higher sampling speed than 16kHz)
Oct 22 20:40:48 I used C++ to read the memory. I would expect python to be slow.
Oct 22 20:40:50 oh I see what you mean, sorry, you are using assembly in the PRU to read the ADC and write it to memory and the host is running python to read that (likely to be the slow part)
Oct 22 20:41:08 Yep! It displays data over a webservice
Oct 22 20:42:02 cool project, I may ask your opinion when I'm done with my code
Oct 22 20:42:43 We discussed using C++ for saving data to an SD card and python just for the web stuff.
Oct 22 20:42:48 But, never got around to it.
Oct 22 20:42:53 I only have 4 ch @ 16 bits and it uses a whopping 12% of the CPU for all channels. lol I didn't know, though, so while I didn't maximize speed, I kinda angled that way. (shrug)
Oct 22 20:43:42 mm302: sure, I can but I am not the most knowledgeable about this.
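A small sketch of the py-uio side being discussed above, matching the `pruss.ddr.map(Shmem)` call quoted in the log. The Shmem layout here is a made-up placeholder, and the import and device path follow py-uio's shipped examples, so adjust them to your setup:

```python
# Illustrative sketch only: Shmem is a placeholder layout, not the project's
# real structure. The device path follows py-uio's example udev symlinks.
import ctypes
from uio.ti.icss import Icss

class Shmem(ctypes.Structure):
    _fields_ = [
        ('write_index', ctypes.c_uint32),          # e.g. updated by the PRU
        ('samples',     ctypes.c_uint32 * 62500),  # ring buffer filled by the PRU
    ]

pruss = Icss('/dev/uio/pruss/module')

# pruss.ddr is the shared ddr region allocated by uio_pruss; its size reflects
# the extram_pool_sz module parameter (256KB unless overridden).
print('ddr region size:', pruss.ddr.size)

# sanity-check that the structure fits before mapping it into the region
assert ctypes.sizeof(Shmem) <= pruss.ddr.size
shmem = pruss.ddr.map(Shmem)
```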
Oct 22 20:44:05 usually those high resolution ADCs are tiny (not DIP) and require a special custom board, did you both do that (electronics part) as well?
Oct 22 20:44:39 My problem is that I'm an EE, so I did design custom hardware, but I'm out of my league when it comes to getting the software working :)
Oct 22 20:45:01 Someone else did the hardware here.
Oct 22 20:45:18 This is the ADC I used https://www.analog.com/en/products/ad7779.html#product-overview
Oct 22 20:45:20 I'm an EE as well. I just do software instead of hardware. lol
Oct 22 20:46:02 cool that you are going through the obstacles anyway; is this a DIY hobby project for you too or more like work?
Oct 22 20:46:03 I do hardware as a hobby 'cause that itch gots to be scratched.
Oct 22 20:46:38 Kind of more work for me, but I think it is pretty interesting
Oct 22 20:47:17 hunter2018[m]: "The BBB couldn't keep up at the fastest sampling rate" .. in what sense?
Oct 22 20:47:27 cool ADC board, actually easier to avoid designing a board yourself
Oct 22 20:47:54 hunter2018[m]: like, it could easily keep up with that rate if you only need to capture it to ram, so I assume you mean subsequent processing?
Oct 22 20:50:02 Yeah, we save the data as an HDF5 binary file to an SD card while also showing a visualization of the data over a web interface.
Oct 22 20:51:04 The web interface just shows a downsampled version of the data.
Oct 22 20:51:18 to be honest the ADC board you mentioned has a max sampling rate of only 16ksps
Oct 22 20:52:03 Yeah, and that is what we run it at all the time. I made an option for the pru code to downsample and average so that you can get a lower data rate from the pru if necessary.
Oct 22 20:53:24 hunter2018[m]: I mean, 8ch * 24-bit * 16 kHz is only 375 KiB/s, so I don't expect the sd card to be the problem, the interface to the SD card definitely won't be
Oct 22 20:55:11 Yeah, I'm sure we are doing it in a kind of dumb way. We just have a flask app that serves up a display of the data, and another python thread that gets data from the shared ram as quickly as possible and saves it to the SD card.
Oct 22 20:55:36 On the BBB, sometimes this "save data" thread would get held up and let the buffer overflow.
Oct 22 20:56:22 I mean, yeah, duh, you're using python
Oct 22 20:56:29 probably the garbage collector
Oct 22 20:57:07 I imagine the best way would be to have a C program that saves data to the SD card and then have a separate python script that reads some of this data from the file on the SD card and displays it.
Oct 22 20:57:10 use a tiny C/C++ program for your data saving and consider giving that thread real-time priority
Oct 22 20:58:23 But that would require me learning something new, why not just throw a faster beaglebone at the problem and hope it fixes it :p
Oct 22 20:58:38 I doubt it will
Oct 22 20:58:45 hahaha
Oct 22 20:58:54 i like how hunter thinks!
Oct 22 20:59:18 based on my results so far you are right zmatt, still having the same issues
Oct 22 20:59:43 C++ is hardly difficult. Keep the slow python for the webby bits.
Oct 22 21:00:24 exactly, just use C++ for the bits that are actually critical to the real-time processing
Oct 22 21:00:44 Also, believe it or not, AI has become such a buzzword that we have customers who want us to use the BBAI. ONLY BECAUSE IT HAS THE WORD AI IN IT!
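On the "real-time priority" suggestion above: on Linux this means the sched_setscheduler(2) syscall with SCHED_FIFO, which is what a small C/C++ capture program would call. Python exposes the same call, so here is a rough sketch of the idea; the priority value is illustrative, and it needs root or CAP_SYS_NICE:

```python
# Sketch: request SCHED_FIFO real-time scheduling for the data-saving process
# so it isn't held up by other load. A C/C++ program would make the same
# sched_setscheduler(2) call. Requires root or CAP_SYS_NICE.
import os

RT_PRIORITY = 10   # illustrative; valid range is 1-99, higher preempts lower

try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(RT_PRIORITY))
    print("running with SCHED_FIFO priority", RT_PRIORITY)
except PermissionError:
    print("need root or CAP_SYS_NICE to use real-time scheduling")
```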
Oct 22 21:00:44 haha then you are using 2 threads in python, good luck with speed
Oct 22 21:00:53 it might also be a good place to do the downsampling
Oct 22 21:01:06 yeah threading in python is mostly useless
Oct 22 21:01:11 I kid you not.
Oct 22 21:01:23 since only one thread at any given time can execute python code
Oct 22 21:02:06 so it's kind of like cooperative multithreading where the yields are at blocking system calls
Oct 22 21:02:26 So if I made the shared memory in python with py-uio like self.params = self.core.dram.map(Params)
Oct 22 21:02:44 How could I then tell the C code where the buffer is?
Oct 22 21:03:05 I mean, I think you can call C functions from python right? So just pass the address?
Oct 22 21:03:07 to be honest writing a small web service in C to spit the data out could be a nice solution, but not trivial if you haven't used it before
Oct 22 21:03:12 the C code would have to do a similar thing
Oct 22 21:03:25 like, mapping it in python would not be of any use for your C program
Oct 22 21:03:42 it just needs to open the uio device and mmap() the ddr memory region
Oct 22 21:03:58 https://pastebin.com/sXj0p0Dz
Oct 22 21:05:17 to make things as flexible and robust as possible you could have python pass the necessary information: 1. the path to the uio device, 2. the index of the memory region, 3. the offset within that memory region
Oct 22 21:06:26 Cool! So what does the pastebin you linked to do exactly?
Oct 22 21:07:27 I'm not familiar with mmap(). It gets the virtual address of the shared PRU ram buffer?
Oct 22 21:08:42 Uhm. I wrote code with that. One would think I could be helpful. (drewl)
Oct 22 21:09:20 so in this case, fd would be a file descriptor for the uio device, e.g. int fd = open( path, O_RDWR | O_NONBLOCK | O_CLOEXEC );
Oct 22 21:09:40 the index is the index of the memory region, which in python you can find as an attribute on the memory region, e.g. pruss.ddr.index
Oct 22 21:09:48 the path for the device is pruss.path
Oct 22 21:11:13 how big is each memory region indexed?
Oct 22 21:11:14 and unfortunately you can't pass an offset to mmap in this case (because uio abuses it to communicate the index), so you just have to map the whole thing or at least enough of it to cover the part you need
Oct 22 21:11:34 mm302: that depends entirely on the situation
Oct 22 21:11:43 you can get that information from sysfs, which is what py-uio does
Oct 22 21:12:12 ha maybe it's a page size, 4096
Oct 22 21:12:35 for the uio_pruss driver there are only two memory regions: one covering all of pruss and one for the shared ddr memory region, whose size depends on a module parameter (default 256K)
Oct 22 21:12:43 Ok, so with this file descriptor you can do something like read(fd, buff, sizeof(buff)) ? That is, you can read from the file (which is the shared PRU buffer) and write it to another buffer?
Oct 22 21:12:54 these have index 0 and 2 respectively, though it's better to find them by name (which is what py-uio does) rather than hardcoding indices
Oct 22 21:12:57 I was looking at uio_mmap that you pasted
Oct 22 21:13:02 Then you could save this buffer to the SD card?
Oct 22 21:13:05 hunter2018[m]: no!
Oct 22 21:13:20 hunter2018[m]: you have to use mmap() to access the shared memory
Oct 22 21:14:10 read()/write() on the uio file descriptor is used only for irq handling ( header: https://pastebin.com/w42Ti0Hz example: https://pastebin.com/x0sUKmKa )
Oct 22 21:15:09 (though this example doesn't quite apply to uio_pruss, which has some driver-specific peculiarities compared with the generic uio_pdrv_genirq driver)
Oct 22 21:15:58 btw the shared ddr you're talking about, I guess is not the one obtained with prussdrv_map_prumem(PRUSS0_SHARED_DATARAM, address) (I think this is the 12kB that is shared between the two PRUs)
Oct 22 21:15:59 hold on lemme see if I can make a more sensible outline of how you'd do this... I've been wanting an example for that for a while already
Oct 22 21:16:33 mm302: correct, that's just pruss.dram2 or core.shared_dram in py-uio
Oct 22 21:17:07 (technically all three data rams in pruss are shared between the two cores, it's mostly just a convention to reserve dram0 for use by core0 and dram1 for use by core1)
Oct 22 21:17:35 I'm having a look in prussdrv to see if I find a method to map that ddr shared ram you're talking about
Oct 22 21:17:42 it's there
Oct 22 21:18:15 prussdrv_map_extmem
Oct 22 21:18:17 It has to be or I'd be sunk. lol
Oct 22 21:18:43 but libprussdrv sucks anyway
Oct 22 21:19:03 thank you zmatt!
Oct 22 21:20:28 It may "suck" but it works. Perhaps today I'd use something else? (shrug)
Oct 22 21:21:24 zmatt: BTW I tried increasing the shared memory to 2097152. Now I can make my buffer ~10x larger and it holds 62500 samples now. So instead of having to read the buffer every 0.39s I only have to read it every 3.9s. I think this alone will make it work.
Oct 22 21:21:25 I wrote my own code for it :P making a proper replacement for libprussdrv is kinda still on my wish-list
Oct 22 21:21:39 hehe, I had the same issue, could only get prussdrv to work (but I cannot complain as I didn't upgrade the OS)
Oct 22 21:22:02 I don't even know what else is out there for C++. I wrapped libprussdrv in a class that abstracted the warts away and my code uses that.
Oct 22 21:22:06 hunter2018[m]: note that you can query the actual size as pruss.ddr.size
Oct 22 21:22:51 Ragnorok_: well the options in C/C++ are, afaik, use libprussdrv or roll your own... especially if you don't need interrupts the code is pretty trivial
Oct 22 21:23:46 I didn't need interrupts in the end, but when I started down the path I didn't know that, so libprussdrv it was. Worse things could have happened.
Oct 22 21:23:58 the main reason for me to not use libprussdrv is because it hardcodes the device paths, i.e. it assumes /dev/uio0 - /dev/uio7 are the eight devices (any of them serves for mmapping pruss, but you have one device per irq from pruss intc to cortex-a8)
Oct 22 21:24:29 whereas I use uio devices for a bunch of things, so that hardcoded assumption is pretty bad
Oct 22 21:24:48 basic question, this ddr shared memory, what's the start address from the point of view of the PRU? I'm looking at the "global memory map" in the PRU-ICSS reference, but cannot see it
Oct 22 21:24:53 I only use uio for the PRUSS, so it works for me.
Oct 22 21:25:10 Cool! pruss.ddr.size was what I set it to. Does it matter if you set it to a power of 2 or anything? Is it just arbitrary?
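To make the mmap() mechanics described above concrete, here is an untested sketch of what the helper program has to do, written in Python for readability (a C/C++ program performs exactly the same open()/mmap() calls): open the uio device and map region `index`, where uio encodes the region index in the mmap offset as index * page size. The sysfs paths and hex format follow the standard uio layout but should be verified against your kernel:

```python
# Untested sketch of the raw uio mapping procedure; verify sysfs paths/format.
import mmap
import os

def map_uio_region(uio_path, index):
    """Map memory region `index` of a uio device; returns (mmap object, size)."""
    # Region sizes are published in sysfs, e.g. /sys/class/uio/uio0/maps/map1/size
    uio_name = os.path.basename(os.path.realpath(uio_path))   # e.g. "uio0"
    with open("/sys/class/uio/%s/maps/map%d/size" % (uio_name, index)) as f:
        size = int(f.read(), 16)

    fd = os.open(uio_path, os.O_RDWR | os.O_NONBLOCK | os.O_CLOEXEC)
    try:
        # uio repurposes the mmap offset to select the region: index * page size.
        # You can't request an offset *within* the region this way, so map the
        # whole thing (or at least enough to cover the part you need).
        mem = mmap.mmap(fd, size, mmap.MAP_SHARED,
                        mmap.PROT_READ | mmap.PROT_WRITE,
                        offset=index * mmap.PAGESIZE)
    finally:
        os.close(fd)   # the mapping remains valid after the fd is closed
    return mem, size

# Usage: pass the device path and region index from py-uio (pruss.path and
# pruss.ddr.index) rather than hardcoding them, since the ddr region's index
# differs between platforms.
```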
Oct 22 21:25:10 mm302: it's just a chunk of ddr3 memory allocated by the driver, it's not at a fixed address
Oct 22 21:25:23 mm302: so you need to pass the address to your pru program in some way
Oct 22 21:25:41 hunter2018[m]: I think the only requirement is being a multiple of 4KB
Oct 22 21:26:05 ha ok, do I need to convert the address from virtual (point of view of the host process) to physical for the PRU to get the right address?
Oct 22 21:26:11 I've never used this DDR thing. I allocated a meg of RAM as a kernel parm and I put my ring buffers in there. Or that's my memory of how I did it, which is suspect at best.
Oct 22 21:26:22 mm302: you query the physical address of the memory region via sysfs
Oct 22 21:27:27 Ragnorok_: yeah there's other ways, like I said you can reserve a hardcoded chunk of physical memory (e.g. via DT) and then use an uio_pdrv_genirq device to allow userspace to map it (or, if you're being gross, use /dev/mem)
Oct 22 21:27:56 mm302: I came up with a struct that contains PRU data and locked that to location zero in the PRU. I pass addresses and such there, and get status data back.
Oct 22 21:28:16 actually maybe prussdrv_get_phys_addr does it
Oct 22 21:30:43 Ragnorok: sounds good, hacky way
Oct 22 21:31:01 Thanks so much guys! Just the ability to change the buffer size has helped tremendously! I think I might try making a board with the AD7771 which is pin-for-pin compatible with the AD7779 but has up to a 128ksps sampling rate. https://www.analog.com/en/products/ad7771.html#product-overview
Oct 22 21:31:25 Wrong link, https://www.analog.com/en/products/ad7779.html#product-overview
Oct 22 21:31:50 But I hadn't tried this because I wasn't able to keep up with the 16ksps rate of the AD7771.
Oct 22 21:32:03 yeah you can use prussdrv_get_phys_addr() on the result of prussdrv_map_extmem() .. which btw doesn't map anything, it just returns the pointer (it's mapped during initialization)
Oct 22 21:32:25 But now, I can make the buffer bigger. And when that stops working I can try to understand what you all are talking about with regard to using C.
Oct 22 21:32:57 :-D
Oct 22 21:33:58 and this is why it's better to look up memory regions by name rather than hardcoding the index: https://github.com/beagleboard/am335x_pru_package/blob/master/pru_sw/app_loader/interface/__prussdrv.h#L169-L190
Oct 22 21:34:31 on the omap-L1xx the ddr memory region is index 2, on am335x it's index 1
Oct 22 21:35:09 I talked to an engineer at ADI a few years ago, and he said that the AD7779 and AD7771 were identical. They just sold the silicon dies that were kind of messed up and couldn't run as fast when they tested them as the AD7779 (which is a cheaper part).
Oct 22 21:35:28 (earlier I mistakenly said it was index 2 on am335x also, I don't know how I managed to forget it's not)
Oct 22 21:35:47 hunter2018[m]: yeah that's pretty normal, basically just speed bins
Oct 22 21:36:04 wait
Oct 22 21:36:16 the faster part is the cheaper part? I assume you mean the slower one is cheaper
Oct 22 21:36:53 Yeah, the AD7779 is cheaper and slower
Oct 22 21:37:15 ah, not very intuitive part numbering there
Oct 22 21:37:26 I know!
Oct 22 21:38:20 otoh there's no a-priori reason to assume higher=better .. they're just part numbers
Oct 22 21:39:20 If I get the AD7771 working at 128ksps, we can throw away all of our National Instruments DAQs. So that's the real payoff for me.
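To illustrate the "query the physical address via sysfs and pass it to the PRU" step above, a hedged Python sketch building on the earlier py-uio snippet: the Params layout is a made-up placeholder (the PRU firmware would have to mirror it at the start of its data RAM, much like the struct-at-location-zero approach mentioned in the log), the sysfs addr format is assumed to be hex, and py-uio may well expose the address more directly than this.

```python
# Illustrative only: Params is a placeholder layout agreed with the PRU firmware.
import ctypes
import os

class Params(ctypes.Structure):
    _fields_ = [
        ('ddr_phys_addr', ctypes.c_uint32),   # where the shared ddr buffer lives
        ('ddr_size',      ctypes.c_uint32),
    ]

def region_phys_addr(uio_path, index):
    # Physical addresses of uio memory regions are published in sysfs,
    # e.g. /sys/class/uio/uio0/maps/map1/addr (assumed hex format).
    uio_name = os.path.basename(os.path.realpath(uio_path))
    with open("/sys/class/uio/%s/maps/map%d/addr" % (uio_name, index)) as f:
        return int(f.read(), 16)

# pruss comes from py-uio as in the earlier sketch; core0 is assumed to be the
# core running the acquisition firmware.
core = pruss.core0
params = core.dram.map(Params)                 # struct at the start of the core's dram
params.ddr_phys_addr = region_phys_addr(pruss.path, pruss.ddr.index)
params.ddr_size = pruss.ddr.size
```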
Oct 22 21:39:34 ahh I finally see how I misparsed your sentence, yeah never mind, it was just me misreading you
Oct 22 21:39:58 I have another weird example where the TS100 soldering iron came before the TS80 (sold as an improvement but actually less powerful)
Oct 22 21:43:03 btw hunter2018, do you really need 24 bits?
Oct 22 21:45:32 as it's possible you're struggling to reach 16ksps because of the ADC itself, and with lower resolution you may find a faster one, just a thought
Oct 22 21:46:52 Yeah, we actually do need the 24 bits
Oct 22 21:55:21 hunter2018[m]: so, from a memory region (e.g. pruss.ddr), this would obtain the parameters needed on the C/C++ side: https://pastebin.com/JF6449LF
Oct 22 22:02:36 Thanks!
Oct 22 22:13:02 hunter2018[m]: and some completely untested C/C++ code that takes those values and produces a pointer for you: https://pastebin.com/SMibu62G
Oct 22 22:14:23 oh wait, I don't take the same arguments here
Oct 22 22:14:34 eh, size = end - start
Oct 22 22:15:27 though more likely size should be the size of whatever you actually want to map
Oct 22 22:31:00 zmatt: I added the config but I get this
Oct 22 22:31:00 ssh: Could not resolve hostname bbai: Name or service not known
Oct 22 22:31:26 you sure you used the right path for the config file? (~/.ssh/config)
Oct 22 22:31:42 let me check
**** ENDING LOGGING AT Fri Oct 23 10:59:57 2020